2025-05-23 00:00:07.456039 | Job console starting
2025-05-23 00:00:07.490717 | Updating git repos
2025-05-23 00:00:07.594109 | Cloning repos into workspace
2025-05-23 00:00:07.779415 | Restoring repo states
2025-05-23 00:00:07.802223 | Merging changes
2025-05-23 00:00:07.802241 | Checking out repos
2025-05-23 00:00:08.061183 | Preparing playbooks
2025-05-23 00:00:08.729887 | Running Ansible setup
2025-05-23 00:00:14.193549 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-23 00:00:15.120508 |
2025-05-23 00:00:15.120666 | PLAY [Base pre]
2025-05-23 00:00:15.136739 |
2025-05-23 00:00:15.136876 | TASK [Setup log path fact]
2025-05-23 00:00:15.156145 | orchestrator | ok
2025-05-23 00:00:15.173248 |
2025-05-23 00:00:15.173384 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-23 00:00:15.203975 | orchestrator | ok
2025-05-23 00:00:15.215959 |
2025-05-23 00:00:15.216071 | TASK [emit-job-header : Print job information]
2025-05-23 00:00:15.257277 | # Job Information
2025-05-23 00:00:15.257467 | Ansible Version: 2.16.14
2025-05-23 00:00:15.257508 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-23 00:00:15.257547 | Pipeline: periodic-midnight
2025-05-23 00:00:15.257574 | Executor: 521e9411259a
2025-05-23 00:00:15.257598 | Triggered by: https://github.com/osism/testbed
2025-05-23 00:00:15.257622 | Event ID: 28135cf2e0a04c0183718d6682cb246e
2025-05-23 00:00:15.264975 |
2025-05-23 00:00:15.265088 | LOOP [emit-job-header : Print node information]
2025-05-23 00:00:15.377274 | orchestrator | ok:
2025-05-23 00:00:15.377528 | orchestrator | # Node Information
2025-05-23 00:00:15.377566 | orchestrator | Inventory Hostname: orchestrator
2025-05-23 00:00:15.377592 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-23 00:00:15.377614 | orchestrator | Username: zuul-testbed04
2025-05-23 00:00:15.377635 | orchestrator | Distro: Debian 12.11
2025-05-23 00:00:15.377664 | orchestrator | Provider: static-testbed
2025-05-23 00:00:15.377689 | orchestrator | Region:
2025-05-23 00:00:15.377710 | orchestrator | Label: testbed-orchestrator
2025-05-23 00:00:15.377730 | orchestrator | Product Name: OpenStack Nova
2025-05-23 00:00:15.377749 | orchestrator | Interface IP: 81.163.193.140
2025-05-23 00:00:15.398824 |
2025-05-23 00:00:15.398985 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-23 00:00:15.827725 | orchestrator -> localhost | changed
2025-05-23 00:00:15.854508 |
2025-05-23 00:00:15.855085 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-23 00:00:16.870391 | orchestrator -> localhost | changed
2025-05-23 00:00:16.884773 |
2025-05-23 00:00:16.884888 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-23 00:00:17.180434 | orchestrator -> localhost | ok
2025-05-23 00:00:17.187182 |
2025-05-23 00:00:17.187284 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-23 00:00:17.207499 | orchestrator | ok
2025-05-23 00:00:17.224473 | orchestrator | included: /var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-23 00:00:17.232170 |
2025-05-23 00:00:17.232266 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-23 00:00:18.335963 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-23 00:00:18.336153 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/512ad4498ee84ff7bed9a58524b24fdc_id_rsa
2025-05-23 00:00:18.336194 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/512ad4498ee84ff7bed9a58524b24fdc_id_rsa.pub
2025-05-23 00:00:18.336221 | orchestrator -> localhost | The key fingerprint is:
2025-05-23 00:00:18.336246 | orchestrator -> localhost | SHA256:X95t/Y/SZKsikCVp9+rjBLGgWfiUkm3Lx1Gwyr9Awa8 zuul-build-sshkey
2025-05-23 00:00:18.336269 | orchestrator -> localhost | The key's randomart image is:
2025-05-23 00:00:18.336301 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-23 00:00:18.336323 | orchestrator -> localhost | | ... |
2025-05-23 00:00:18.336346 | orchestrator -> localhost | | = . o |
2025-05-23 00:00:18.336367 | orchestrator -> localhost | | + X +. |
2025-05-23 00:00:18.336388 | orchestrator -> localhost | | @ B++o |
2025-05-23 00:00:18.336420 | orchestrator -> localhost | | o B.*=S. . |
2025-05-23 00:00:18.336449 | orchestrator -> localhost | | . +o. ..o .o..|
2025-05-23 00:00:18.336470 | orchestrator -> localhost | | E ..... .+..+|
2025-05-23 00:00:18.336490 | orchestrator -> localhost | | . o+ . . oo.|
2025-05-23 00:00:18.336511 | orchestrator -> localhost | | .ooo ..o. +|
2025-05-23 00:00:18.336531 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-23 00:00:18.336586 | orchestrator -> localhost | ok: Runtime: 0:00:00.652714
2025-05-23 00:00:18.343799 |
2025-05-23 00:00:18.343894 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-23 00:00:18.363436 | orchestrator | ok
2025-05-23 00:00:18.375592 | orchestrator | included: /var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-23 00:00:18.384550 |
2025-05-23 00:00:18.384647 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-23 00:00:18.397542 | orchestrator | skipping: Conditional result was False
2025-05-23 00:00:18.406033 |
2025-05-23 00:00:18.406144 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-23 00:00:18.974630 | orchestrator | changed
2025-05-23 00:00:18.981289 |
2025-05-23 00:00:18.981389 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-23 00:00:19.247673 | orchestrator | ok
2025-05-23 00:00:19.253742 |
2025-05-23 00:00:19.253842 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-23 00:00:19.641081 | orchestrator | ok
2025-05-23 00:00:19.649126 |
2025-05-23 00:00:19.649296 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-23 00:00:20.083325 | orchestrator | ok
2025-05-23 00:00:20.095611 |
2025-05-23 00:00:20.095711 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-23 00:00:20.118583 | orchestrator | skipping: Conditional result was False
2025-05-23 00:00:20.125184 |
2025-05-23 00:00:20.125274 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-23 00:00:20.562524 | orchestrator -> localhost | changed
2025-05-23 00:00:20.575548 |
2025-05-23 00:00:20.575659 | TASK [add-build-sshkey : Add back temp key]
2025-05-23 00:00:20.885556 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/512ad4498ee84ff7bed9a58524b24fdc_id_rsa (zuul-build-sshkey)
2025-05-23 00:00:20.885771 | orchestrator -> localhost | ok: Runtime: 0:00:00.013425
2025-05-23 00:00:20.892613 |
2025-05-23 00:00:20.892701 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-23 00:00:21.308450 | orchestrator | ok
2025-05-23 00:00:21.314974 |
2025-05-23 00:00:21.315101 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-23 00:00:21.350235 | orchestrator | skipping: Conditional result was False
2025-05-23 00:00:21.417032 |
2025-05-23 00:00:21.417166 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-23 00:00:21.792899 | orchestrator | ok
2025-05-23 00:00:21.806105 |
2025-05-23 00:00:21.806223 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-23 00:00:21.834554 | orchestrator | ok
2025-05-23 00:00:21.846043 |
2025-05-23 00:00:21.846170 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-23 00:00:22.122745 | orchestrator -> localhost | ok
2025-05-23 00:00:22.130526 |
2025-05-23 00:00:22.130643 | TASK [validate-host : Collect information about the host]
2025-05-23 00:00:23.291693 | orchestrator | ok
2025-05-23 00:00:23.310303 |
2025-05-23 00:00:23.310511 | TASK [validate-host : Sanitize hostname]
2025-05-23 00:00:23.377433 | orchestrator | ok
2025-05-23 00:00:23.387241 |
2025-05-23 00:00:23.387558 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-23 00:00:24.307300 | orchestrator -> localhost | changed
2025-05-23 00:00:24.321775 |
2025-05-23 00:00:24.321925 | TASK [validate-host : Collect information about zuul worker]
2025-05-23 00:00:24.865562 | orchestrator | ok
2025-05-23 00:00:24.885520 |
2025-05-23 00:00:24.885675 | TASK [validate-host : Write out all zuul information for each host]
2025-05-23 00:00:26.448540 | orchestrator -> localhost | changed
2025-05-23 00:00:26.475757 |
2025-05-23 00:00:26.476926 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-23 00:00:26.847617 | orchestrator | ok
2025-05-23 00:00:26.854531 |
2025-05-23 00:00:26.854652 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-23 00:00:49.709475 | orchestrator | changed:
2025-05-23 00:00:49.711224 | orchestrator | .d..t...... src/
2025-05-23 00:00:49.711317 | orchestrator | .d..t...... src/github.com/
2025-05-23 00:00:49.711347 | orchestrator | .d..t...... src/github.com/osism/
2025-05-23 00:00:49.711370 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-23 00:00:49.711392 | orchestrator | RedHat.yml
2025-05-23 00:00:49.725400 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-23 00:00:49.725428 | orchestrator | RedHat.yml
2025-05-23 00:00:49.725483 | orchestrator | = 1.53.0"...
2025-05-23 00:01:05.023185 | orchestrator | 00:01:05.022 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-23 00:01:06.362399 | orchestrator | 00:01:06.362 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-23 00:01:07.384199 | orchestrator | 00:01:07.383 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-23 00:01:08.869898 | orchestrator | 00:01:08.869 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-23 00:01:09.881677 | orchestrator | 00:01:09.881 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-23 00:01:11.341953 | orchestrator | 00:01:11.341 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-23 00:01:12.461314 | orchestrator | 00:01:12.461 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-23 00:01:12.461592 | orchestrator | 00:01:12.461 STDOUT terraform: Providers are signed by their developers.
2025-05-23 00:01:12.461604 | orchestrator | 00:01:12.461 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-23 00:01:12.461609 | orchestrator | 00:01:12.461 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-23 00:01:12.461929 | orchestrator | 00:01:12.461 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-23 00:01:12.461945 | orchestrator | 00:01:12.461 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-23 00:01:12.461954 | orchestrator | 00:01:12.461 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-23 00:01:12.461959 | orchestrator | 00:01:12.461 STDOUT terraform: you run "tofu init" in the future.
2025-05-23 00:01:12.462726 | orchestrator | 00:01:12.462 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-23 00:01:12.463168 | orchestrator | 00:01:12.462 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-23 00:01:12.463184 | orchestrator | 00:01:12.462 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-23 00:01:12.463189 | orchestrator | 00:01:12.462 STDOUT terraform: should now work.
2025-05-23 00:01:12.463193 | orchestrator | 00:01:12.462 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-23 00:01:12.463198 | orchestrator | 00:01:12.463 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-23 00:01:12.463203 | orchestrator | 00:01:12.463 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-23 00:01:12.720951 | orchestrator | 00:01:12.720 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-23 00:01:12.914528 | orchestrator | 00:01:12.914 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-23 00:01:12.914661 | orchestrator | 00:01:12.914 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-23 00:01:12.914681 | orchestrator | 00:01:12.914 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-23 00:01:12.914694 | orchestrator | 00:01:12.914 STDOUT terraform: for this configuration.
2025-05-23 00:01:13.120601 | orchestrator | 00:01:13.120 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
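
The provider selection shown in the init output above (hashicorp/null, hashicorp/local matching ">= 2.2.0", and terraform-provider-openstack/openstack, with only the fragment '>= 1.53.0' of its constraint visible) would typically come from a required_providers block in the testbed Terraform files. A minimal sketch of such a block, assuming constraints consistent with the versions the log reports; the actual declaration lives in github.com/osism/testbed and may differ:

# Hypothetical reconstruction; only the ">= 2.2.0" (local) and ">= 1.53.0"
# (openstack) constraints are visible in the log above, null's constraint is assumed.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
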
2025-05-23 00:01:13.225205 | orchestrator | 00:01:13.225 STDOUT terraform: ci.auto.tfvars
2025-05-23 00:01:13.228045 | orchestrator | 00:01:13.227 STDOUT terraform: default_custom.tf
2025-05-23 00:01:13.407496 | orchestrator | 00:01:13.407 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-23 00:01:14.377013 | orchestrator | 00:01:14.376 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-23 00:01:14.915054 | orchestrator | 00:01:14.914 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-23 00:01:15.096872 | orchestrator | 00:01:15.096 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-23 00:01:15.096981 | orchestrator | 00:01:15.096 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-23 00:01:15.096995 | orchestrator | 00:01:15.096 STDOUT terraform:  + create
2025-05-23 00:01:15.097007 | orchestrator | 00:01:15.096 STDOUT terraform:  <= read (data resources)
2025-05-23 00:01:15.097018 | orchestrator | 00:01:15.096 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-23 00:01:15.097356 | orchestrator | 00:01:15.097 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-23 00:01:15.097379 | orchestrator | 00:01:15.097 STDOUT terraform:  # (config refers to values not yet known)
2025-05-23 00:01:15.097392 | orchestrator | 00:01:15.097 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-23 00:01:15.097439 | orchestrator | 00:01:15.097 STDOUT terraform:  + checksum = (known after apply)
2025-05-23 00:01:15.097476 | orchestrator | 00:01:15.097 STDOUT terraform:  + created_at = (known after apply)
2025-05-23 00:01:15.097508 | orchestrator | 00:01:15.097 STDOUT terraform:  + file = (known after apply)
2025-05-23 00:01:15.097546 | orchestrator | 00:01:15.097 STDOUT terraform:  + id = (known after apply)
2025-05-23 00:01:15.097584 | orchestrator | 00:01:15.097 STDOUT terraform:  + metadata = (known after apply)
2025-05-23 00:01:15.097639 | orchestrator | 00:01:15.097 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-23 00:01:15.097654 | orchestrator | 00:01:15.097 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-23 00:01:15.097687 | orchestrator | 00:01:15.097 STDOUT terraform:  + most_recent = true
2025-05-23 00:01:15.097744 | orchestrator | 00:01:15.097 STDOUT terraform:  + name = (known after apply)
2025-05-23 00:01:15.097755 | orchestrator | 00:01:15.097 STDOUT terraform:  + protected = (known after apply)
2025-05-23 00:01:15.097792 | orchestrator | 00:01:15.097 STDOUT terraform:  + region = (known after apply)
2025-05-23 00:01:15.097823 | orchestrator | 00:01:15.097 STDOUT terraform:  + schema = (known after apply)
2025-05-23 00:01:15.097883 | orchestrator | 00:01:15.097 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-23 00:01:15.097898 | orchestrator | 00:01:15.097 STDOUT terraform:  + tags = (known after apply)
2025-05-23 00:01:15.097928 | orchestrator | 00:01:15.097 STDOUT terraform:  + updated_at = (known after apply)
2025-05-23 00:01:15.097960 | orchestrator | 00:01:15.097 STDOUT terraform:  }
2025-05-23 00:01:15.098224 | orchestrator | 00:01:15.098 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-23 00:01:15.098264 | orchestrator | 00:01:15.098 STDOUT terraform:  # (config refers to values not yet known) 2025-05-23 00:01:15.098306 | orchestrator | 00:01:15.098 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-23 00:01:15.098336 | orchestrator | 00:01:15.098 STDOUT terraform:  + checksum = (known after apply) 2025-05-23 00:01:15.098391 | orchestrator | 00:01:15.098 STDOUT terraform:  + created_at = (known after apply) 2025-05-23 00:01:15.098406 | orchestrator | 00:01:15.098 STDOUT terraform:  + file = (known after apply) 2025-05-23 00:01:15.098443 | orchestrator | 00:01:15.098 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.098476 | orchestrator | 00:01:15.098 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.098516 | orchestrator | 00:01:15.098 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-23 00:01:15.098551 | orchestrator | 00:01:15.098 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-23 00:01:15.099639 | orchestrator | 00:01:15.098 STDOUT terraform:  + most_recent = true 2025-05-23 00:01:15.099660 | orchestrator | 00:01:15.099 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.099704 | orchestrator | 00:01:15.099 STDOUT terraform:  + protected = (known after apply) 2025-05-23 00:01:15.099740 | orchestrator | 00:01:15.099 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.099771 | orchestrator | 00:01:15.099 STDOUT terraform:  + schema = (known after apply) 2025-05-23 00:01:15.099823 | orchestrator | 00:01:15.099 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-23 00:01:15.099840 | orchestrator | 00:01:15.099 STDOUT terraform:  + tags = (known after apply) 2025-05-23 00:01:15.099893 | orchestrator | 00:01:15.099 STDOUT terraform:  + updated_at = (known after apply) 2025-05-23 00:01:15.099905 | orchestrator | 00:01:15.099 STDOUT terraform:  } 2025-05-23 00:01:15.099973 | orchestrator | 00:01:15.099 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-23 00:01:15.099989 | orchestrator | 00:01:15.099 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-23 00:01:15.100038 | orchestrator | 00:01:15.099 STDOUT terraform:  + content = (known after apply) 2025-05-23 00:01:15.100096 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-23 00:01:15.100176 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-23 00:01:15.100196 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-23 00:01:15.100252 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-23 00:01:15.100311 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-23 00:01:15.100350 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-23 00:01:15.100364 | orchestrator | 00:01:15.100 STDOUT terraform:  + directory_permission = "0777" 2025-05-23 00:01:15.100398 | orchestrator | 00:01:15.100 STDOUT terraform:  + file_permission = "0644" 2025-05-23 00:01:15.100451 | orchestrator | 00:01:15.100 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-23 00:01:15.100499 | orchestrator | 00:01:15.100 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.100530 | orchestrator | 00:01:15.100 STDOUT terraform:  } 2025-05-23 00:01:15.100544 | orchestrator | 00:01:15.100 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-05-23 00:01:15.100580 | orchestrator | 00:01:15.100 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-23 00:01:15.100618 | orchestrator | 00:01:15.100 STDOUT terraform:  + content = (known after apply) 2025-05-23 00:01:15.100669 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-23 00:01:15.100700 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-23 00:01:15.100761 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-23 00:01:15.100774 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-23 00:01:15.100835 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-23 00:01:15.100868 | orchestrator | 00:01:15.100 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-23 00:01:15.100894 | orchestrator | 00:01:15.100 STDOUT terraform:  + directory_permission = "0777" 2025-05-23 00:01:15.100937 | orchestrator | 00:01:15.100 STDOUT terraform:  + file_permission = "0644" 2025-05-23 00:01:15.100950 | orchestrator | 00:01:15.100 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-23 00:01:15.100999 | orchestrator | 00:01:15.100 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.101012 | orchestrator | 00:01:15.100 STDOUT terraform:  } 2025-05-23 00:01:15.101053 | orchestrator | 00:01:15.101 STDOUT terraform:  # local_file.inventory will be created 2025-05-23 00:01:15.101066 | orchestrator | 00:01:15.101 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-23 00:01:15.101107 | orchestrator | 00:01:15.101 STDOUT terraform:  + content = (known after apply) 2025-05-23 00:01:15.101172 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-23 00:01:15.101206 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-23 00:01:15.101247 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-23 00:01:15.101296 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-23 00:01:15.101330 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-23 00:01:15.101379 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-23 00:01:15.101392 | orchestrator | 00:01:15.101 STDOUT terraform:  + directory_permission = "0777" 2025-05-23 00:01:15.101425 | orchestrator | 00:01:15.101 STDOUT terraform:  + file_permission = "0644" 2025-05-23 00:01:15.101451 | orchestrator | 00:01:15.101 STDOUT terraform:  + filename = "inventory.ci" 2025-05-23 00:01:15.101499 | orchestrator | 00:01:15.101 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.101511 | orchestrator | 00:01:15.101 STDOUT terraform:  } 2025-05-23 00:01:15.101555 | orchestrator | 00:01:15.101 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-23 00:01:15.101588 | orchestrator | 00:01:15.101 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-23 00:01:15.101619 | orchestrator | 00:01:15.101 STDOUT terraform:  + content = (sensitive value) 2025-05-23 00:01:15.101671 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-23 00:01:15.101702 | orchestrator | 00:01:15.101 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-23 00:01:15.101759 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-23 00:01:15.101786 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-23 00:01:15.101839 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-23 00:01:15.101871 | orchestrator | 00:01:15.101 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-23 00:01:15.101901 | orchestrator | 00:01:15.101 STDOUT terraform:  + directory_permission = "0700" 2025-05-23 00:01:15.101946 | orchestrator | 00:01:15.101 STDOUT terraform:  + file_permission = "0600" 2025-05-23 00:01:15.101958 | orchestrator | 00:01:15.101 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-23 00:01:15.102004 | orchestrator | 00:01:15.101 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.102046 | orchestrator | 00:01:15.101 STDOUT terraform:  } 2025-05-23 00:01:15.102080 | orchestrator | 00:01:15.102 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-23 00:01:15.102114 | orchestrator | 00:01:15.102 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-23 00:01:15.102185 | orchestrator | 00:01:15.102 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.102195 | orchestrator | 00:01:15.102 STDOUT terraform:  } 2025-05-23 00:01:15.102237 | orchestrator | 00:01:15.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-23 00:01:15.102309 | orchestrator | 00:01:15.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-23 00:01:15.102322 | orchestrator | 00:01:15.102 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.102341 | orchestrator | 00:01:15.102 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.102398 | orchestrator | 00:01:15.102 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.102411 | orchestrator | 00:01:15.102 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.102448 | orchestrator | 00:01:15.102 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.102496 | orchestrator | 00:01:15.102 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-23 00:01:15.102535 | orchestrator | 00:01:15.102 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.102563 | orchestrator | 00:01:15.102 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.102574 | orchestrator | 00:01:15.102 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.102584 | orchestrator | 00:01:15.102 STDOUT terraform:  } 2025-05-23 00:01:15.102646 | orchestrator | 00:01:15.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-23 00:01:15.102695 | orchestrator | 00:01:15.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.102728 | orchestrator | 00:01:15.102 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.102763 | orchestrator | 00:01:15.102 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.102790 | orchestrator | 00:01:15.102 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.102817 | orchestrator | 00:01:15.102 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.102863 | orchestrator | 00:01:15.102 STDOUT terraform:  + metadata = (known 
after apply) 2025-05-23 00:01:15.102894 | orchestrator | 00:01:15.102 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-23 00:01:15.102927 | orchestrator | 00:01:15.102 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.102961 | orchestrator | 00:01:15.102 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.102972 | orchestrator | 00:01:15.102 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.102981 | orchestrator | 00:01:15.102 STDOUT terraform:  } 2025-05-23 00:01:15.103029 | orchestrator | 00:01:15.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-23 00:01:15.103089 | orchestrator | 00:01:15.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.103115 | orchestrator | 00:01:15.103 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.103175 | orchestrator | 00:01:15.103 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.103212 | orchestrator | 00:01:15.103 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.103255 | orchestrator | 00:01:15.103 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.103266 | orchestrator | 00:01:15.103 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.103313 | orchestrator | 00:01:15.103 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-23 00:01:15.103363 | orchestrator | 00:01:15.103 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.103381 | orchestrator | 00:01:15.103 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.103390 | orchestrator | 00:01:15.103 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.103397 | orchestrator | 00:01:15.103 STDOUT terraform:  } 2025-05-23 00:01:15.103442 | orchestrator | 00:01:15.103 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-23 00:01:15.103501 | orchestrator | 00:01:15.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.103531 | orchestrator | 00:01:15.103 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.103541 | orchestrator | 00:01:15.103 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.103577 | orchestrator | 00:01:15.103 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.103610 | orchestrator | 00:01:15.103 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.103644 | orchestrator | 00:01:15.103 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.103686 | orchestrator | 00:01:15.103 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-23 00:01:15.103719 | orchestrator | 00:01:15.103 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.103750 | orchestrator | 00:01:15.103 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.103760 | orchestrator | 00:01:15.103 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.103769 | orchestrator | 00:01:15.103 STDOUT terraform:  } 2025-05-23 00:01:15.103828 | orchestrator | 00:01:15.103 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-23 00:01:15.103879 | orchestrator | 00:01:15.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.103911 | orchestrator | 00:01:15.103 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.103934 | 
orchestrator | 00:01:15.103 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.103967 | orchestrator | 00:01:15.103 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.104001 | orchestrator | 00:01:15.103 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.104033 | orchestrator | 00:01:15.103 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.104075 | orchestrator | 00:01:15.104 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-23 00:01:15.104108 | orchestrator | 00:01:15.104 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.104178 | orchestrator | 00:01:15.104 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.104189 | orchestrator | 00:01:15.104 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.104195 | orchestrator | 00:01:15.104 STDOUT terraform:  } 2025-05-23 00:01:15.104205 | orchestrator | 00:01:15.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-23 00:01:15.104264 | orchestrator | 00:01:15.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.104282 | orchestrator | 00:01:15.104 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.104309 | orchestrator | 00:01:15.104 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.104342 | orchestrator | 00:01:15.104 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.104375 | orchestrator | 00:01:15.104 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.104408 | orchestrator | 00:01:15.104 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.104451 | orchestrator | 00:01:15.104 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-23 00:01:15.104485 | orchestrator | 00:01:15.104 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.104506 | orchestrator | 00:01:15.104 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.104528 | orchestrator | 00:01:15.104 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.111477 | orchestrator | 00:01:15.104 STDOUT terraform:  } 2025-05-23 00:01:15.111552 | orchestrator | 00:01:15.111 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-23 00:01:15.111642 | orchestrator | 00:01:15.111 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-23 00:01:15.111696 | orchestrator | 00:01:15.111 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.111743 | orchestrator | 00:01:15.111 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.111787 | orchestrator | 00:01:15.111 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.111842 | orchestrator | 00:01:15.111 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.111894 | orchestrator | 00:01:15.111 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.111996 | orchestrator | 00:01:15.111 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-23 00:01:15.112009 | orchestrator | 00:01:15.111 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.112046 | orchestrator | 00:01:15.111 STDOUT terraform:  + size = 80 2025-05-23 00:01:15.112075 | orchestrator | 00:01:15.112 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.112084 | orchestrator | 00:01:15.112 STDOUT terraform:  } 2025-05-23 00:01:15.112203 | orchestrator | 
00:01:15.112 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-23 00:01:15.112261 | orchestrator | 00:01:15.112 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.112314 | orchestrator | 00:01:15.112 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.112341 | orchestrator | 00:01:15.112 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.112396 | orchestrator | 00:01:15.112 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.112444 | orchestrator | 00:01:15.112 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.112507 | orchestrator | 00:01:15.112 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-23 00:01:15.112578 | orchestrator | 00:01:15.112 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.112638 | orchestrator | 00:01:15.112 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.112674 | orchestrator | 00:01:15.112 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.112709 | orchestrator | 00:01:15.112 STDOUT terraform:  } 2025-05-23 00:01:15.112887 | orchestrator | 00:01:15.112 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-23 00:01:15.112953 | orchestrator | 00:01:15.112 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.113003 | orchestrator | 00:01:15.112 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.113040 | orchestrator | 00:01:15.112 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.113089 | orchestrator | 00:01:15.113 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.113221 | orchestrator | 00:01:15.113 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.113301 | orchestrator | 00:01:15.113 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-23 00:01:15.113354 | orchestrator | 00:01:15.113 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.113388 | orchestrator | 00:01:15.113 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.113431 | orchestrator | 00:01:15.113 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.113440 | orchestrator | 00:01:15.113 STDOUT terraform:  } 2025-05-23 00:01:15.113518 | orchestrator | 00:01:15.113 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-23 00:01:15.113588 | orchestrator | 00:01:15.113 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.113641 | orchestrator | 00:01:15.113 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.113674 | orchestrator | 00:01:15.113 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.113750 | orchestrator | 00:01:15.113 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.113792 | orchestrator | 00:01:15.113 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.113847 | orchestrator | 00:01:15.113 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-23 00:01:15.113890 | orchestrator | 00:01:15.113 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.113918 | orchestrator | 00:01:15.113 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.113927 | orchestrator | 00:01:15.113 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.113953 | orchestrator | 00:01:15.113 STDOUT terraform:  } 2025-05-23 00:01:15.114029 | 
orchestrator | 00:01:15.113 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-23 00:01:15.114098 | orchestrator | 00:01:15.114 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.114161 | orchestrator | 00:01:15.114 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.114171 | orchestrator | 00:01:15.114 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.114229 | orchestrator | 00:01:15.114 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.114271 | orchestrator | 00:01:15.114 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.114325 | orchestrator | 00:01:15.114 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-23 00:01:15.114368 | orchestrator | 00:01:15.114 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.114395 | orchestrator | 00:01:15.114 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.114431 | orchestrator | 00:01:15.114 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.114438 | orchestrator | 00:01:15.114 STDOUT terraform:  } 2025-05-23 00:01:15.114501 | orchestrator | 00:01:15.114 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-23 00:01:15.114560 | orchestrator | 00:01:15.114 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.114601 | orchestrator | 00:01:15.114 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.114628 | orchestrator | 00:01:15.114 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.114668 | orchestrator | 00:01:15.114 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.114712 | orchestrator | 00:01:15.114 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.114762 | orchestrator | 00:01:15.114 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-23 00:01:15.114806 | orchestrator | 00:01:15.114 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.114844 | orchestrator | 00:01:15.114 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.114854 | orchestrator | 00:01:15.114 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.114887 | orchestrator | 00:01:15.114 STDOUT terraform:  } 2025-05-23 00:01:15.114939 | orchestrator | 00:01:15.114 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-23 00:01:15.114999 | orchestrator | 00:01:15.114 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.115046 | orchestrator | 00:01:15.114 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.115073 | orchestrator | 00:01:15.115 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.115114 | orchestrator | 00:01:15.115 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.115167 | orchestrator | 00:01:15.115 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.115223 | orchestrator | 00:01:15.115 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-23 00:01:15.115265 | orchestrator | 00:01:15.115 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.115293 | orchestrator | 00:01:15.115 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.115321 | orchestrator | 00:01:15.115 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.115330 | orchestrator | 00:01:15.115 STDOUT terraform:  } 2025-05-23 
00:01:15.115395 | orchestrator | 00:01:15.115 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-23 00:01:15.115451 | orchestrator | 00:01:15.115 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.115496 | orchestrator | 00:01:15.115 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.115530 | orchestrator | 00:01:15.115 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.115570 | orchestrator | 00:01:15.115 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.115610 | orchestrator | 00:01:15.115 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.115663 | orchestrator | 00:01:15.115 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-23 00:01:15.115702 | orchestrator | 00:01:15.115 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.115729 | orchestrator | 00:01:15.115 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.115755 | orchestrator | 00:01:15.115 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.115764 | orchestrator | 00:01:15.115 STDOUT terraform:  } 2025-05-23 00:01:15.115828 | orchestrator | 00:01:15.115 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-23 00:01:15.115885 | orchestrator | 00:01:15.115 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.115928 | orchestrator | 00:01:15.115 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.115961 | orchestrator | 00:01:15.115 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.116001 | orchestrator | 00:01:15.115 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.116041 | orchestrator | 00:01:15.115 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.116092 | orchestrator | 00:01:15.116 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-23 00:01:15.116150 | orchestrator | 00:01:15.116 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.116165 | orchestrator | 00:01:15.116 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.116206 | orchestrator | 00:01:15.116 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.116216 | orchestrator | 00:01:15.116 STDOUT terraform:  } 2025-05-23 00:01:15.116278 | orchestrator | 00:01:15.116 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-23 00:01:15.116342 | orchestrator | 00:01:15.116 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-23 00:01:15.116380 | orchestrator | 00:01:15.116 STDOUT terraform:  + attachment = (known after apply) 2025-05-23 00:01:15.116419 | orchestrator | 00:01:15.116 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.116448 | orchestrator | 00:01:15.116 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.116492 | orchestrator | 00:01:15.116 STDOUT terraform:  + metadata = (known after apply) 2025-05-23 00:01:15.116541 | orchestrator | 00:01:15.116 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-23 00:01:15.116588 | orchestrator | 00:01:15.116 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.116598 | orchestrator | 00:01:15.116 STDOUT terraform:  + size = 20 2025-05-23 00:01:15.116639 | orchestrator | 00:01:15.116 STDOUT terraform:  + volume_type = "ssd" 2025-05-23 00:01:15.116655 | orchestrator | 00:01:15.116 STDOUT 
terraform:  } 2025-05-23 00:01:15.116711 | orchestrator | 00:01:15.116 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-23 00:01:15.116784 | orchestrator | 00:01:15.116 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-23 00:01:15.116830 | orchestrator | 00:01:15.116 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.116879 | orchestrator | 00:01:15.116 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.116925 | orchestrator | 00:01:15.116 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.116974 | orchestrator | 00:01:15.116 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.117005 | orchestrator | 00:01:15.116 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.117043 | orchestrator | 00:01:15.116 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.117082 | orchestrator | 00:01:15.117 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.117147 | orchestrator | 00:01:15.117 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.117199 | orchestrator | 00:01:15.117 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-23 00:01:15.117236 | orchestrator | 00:01:15.117 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.117284 | orchestrator | 00:01:15.117 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.117334 | orchestrator | 00:01:15.117 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.117379 | orchestrator | 00:01:15.117 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.117417 | orchestrator | 00:01:15.117 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.117458 | orchestrator | 00:01:15.117 STDOUT terraform:  + name = "testbed-manager" 2025-05-23 00:01:15.117495 | orchestrator | 00:01:15.117 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.117540 | orchestrator | 00:01:15.117 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.117588 | orchestrator | 00:01:15.117 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.117625 | orchestrator | 00:01:15.117 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.117669 | orchestrator | 00:01:15.117 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.117714 | orchestrator | 00:01:15.117 STDOUT terraform:  + user_data = (known after apply) 2025-05-23 00:01:15.117725 | orchestrator | 00:01:15.117 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.117769 | orchestrator | 00:01:15.117 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.117809 | orchestrator | 00:01:15.117 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.117848 | orchestrator | 00:01:15.117 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.117888 | orchestrator | 00:01:15.117 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.117930 | orchestrator | 00:01:15.117 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.117982 | orchestrator | 00:01:15.117 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.117993 | orchestrator | 00:01:15.117 STDOUT terraform:  } 2025-05-23 00:01:15.118006 | orchestrator | 00:01:15.117 STDOUT terraform:  + network { 2025-05-23 00:01:15.118062 | orchestrator | 00:01:15.118 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.118106 | orchestrator | 
00:01:15.118 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.118181 | orchestrator | 00:01:15.118 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.118197 | orchestrator | 00:01:15.118 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.118244 | orchestrator | 00:01:15.118 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.118294 | orchestrator | 00:01:15.118 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.118331 | orchestrator | 00:01:15.118 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.118338 | orchestrator | 00:01:15.118 STDOUT terraform:  } 2025-05-23 00:01:15.118347 | orchestrator | 00:01:15.118 STDOUT terraform:  } 2025-05-23 00:01:15.118411 | orchestrator | 00:01:15.118 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-23 00:01:15.118468 | orchestrator | 00:01:15.118 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.118515 | orchestrator | 00:01:15.118 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.118563 | orchestrator | 00:01:15.118 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.118609 | orchestrator | 00:01:15.118 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.118659 | orchestrator | 00:01:15.118 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.118688 | orchestrator | 00:01:15.118 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.118717 | orchestrator | 00:01:15.118 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.118767 | orchestrator | 00:01:15.118 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.118812 | orchestrator | 00:01:15.118 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.118854 | orchestrator | 00:01:15.118 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 00:01:15.118882 | orchestrator | 00:01:15.118 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.118949 | orchestrator | 00:01:15.118 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.118995 | orchestrator | 00:01:15.118 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.119045 | orchestrator | 00:01:15.118 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.119073 | orchestrator | 00:01:15.119 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.119118 | orchestrator | 00:01:15.119 STDOUT terraform:  + name = "testbed-node-0" 2025-05-23 00:01:15.119176 | orchestrator | 00:01:15.119 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.119228 | orchestrator | 00:01:15.119 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.119275 | orchestrator | 00:01:15.119 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.119309 | orchestrator | 00:01:15.119 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.119356 | orchestrator | 00:01:15.119 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.119425 | orchestrator | 00:01:15.119 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.119435 | orchestrator | 00:01:15.119 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.119477 | orchestrator | 00:01:15.119 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.119513 | orchestrator | 00:01:15.119 STDOUT 
terraform:  + delete_on_termination = false 2025-05-23 00:01:15.119556 | orchestrator | 00:01:15.119 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.119593 | orchestrator | 00:01:15.119 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.119638 | orchestrator | 00:01:15.119 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.119688 | orchestrator | 00:01:15.119 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.119698 | orchestrator | 00:01:15.119 STDOUT terraform:  } 2025-05-23 00:01:15.119732 | orchestrator | 00:01:15.119 STDOUT terraform:  + network { 2025-05-23 00:01:15.119742 | orchestrator | 00:01:15.119 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.119793 | orchestrator | 00:01:15.119 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.119833 | orchestrator | 00:01:15.119 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.119877 | orchestrator | 00:01:15.119 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.119924 | orchestrator | 00:01:15.119 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.119963 | orchestrator | 00:01:15.119 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.120006 | orchestrator | 00:01:15.119 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.120016 | orchestrator | 00:01:15.119 STDOUT terraform:  } 2025-05-23 00:01:15.120042 | orchestrator | 00:01:15.120 STDOUT terraform:  } 2025-05-23 00:01:15.120100 | orchestrator | 00:01:15.120 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-23 00:01:15.120169 | orchestrator | 00:01:15.120 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.120214 | orchestrator | 00:01:15.120 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.120260 | orchestrator | 00:01:15.120 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.120308 | orchestrator | 00:01:15.120 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.120354 | orchestrator | 00:01:15.120 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.120383 | orchestrator | 00:01:15.120 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.120398 | orchestrator | 00:01:15.120 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.120457 | orchestrator | 00:01:15.120 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.120503 | orchestrator | 00:01:15.120 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.120544 | orchestrator | 00:01:15.120 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 00:01:15.120572 | orchestrator | 00:01:15.120 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.120620 | orchestrator | 00:01:15.120 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.120667 | orchestrator | 00:01:15.120 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.120716 | orchestrator | 00:01:15.120 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.120743 | orchestrator | 00:01:15.120 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.120790 | orchestrator | 00:01:15.120 STDOUT terraform:  + name = "testbed-node-1" 2025-05-23 00:01:15.120817 | orchestrator | 00:01:15.120 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.120863 | orchestrator | 00:01:15.120 
STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.120912 | orchestrator | 00:01:15.120 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.120939 | orchestrator | 00:01:15.120 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.120987 | orchestrator | 00:01:15.120 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.121051 | orchestrator | 00:01:15.120 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.121061 | orchestrator | 00:01:15.121 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.121105 | orchestrator | 00:01:15.121 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.121158 | orchestrator | 00:01:15.121 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.121195 | orchestrator | 00:01:15.121 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.121283 | orchestrator | 00:01:15.121 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.122862 | orchestrator | 00:01:15.121 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.122917 | orchestrator | 00:01:15.121 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.122924 | orchestrator | 00:01:15.121 STDOUT terraform:  } 2025-05-23 00:01:15.122929 | orchestrator | 00:01:15.121 STDOUT terraform:  + network { 2025-05-23 00:01:15.122933 | orchestrator | 00:01:15.121 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.122938 | orchestrator | 00:01:15.121 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.122942 | orchestrator | 00:01:15.121 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.122946 | orchestrator | 00:01:15.121 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.122950 | orchestrator | 00:01:15.121 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.122964 | orchestrator | 00:01:15.121 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.122968 | orchestrator | 00:01:15.122 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.122972 | orchestrator | 00:01:15.122 STDOUT terraform:  } 2025-05-23 00:01:15.122976 | orchestrator | 00:01:15.122 STDOUT terraform:  } 2025-05-23 00:01:15.122980 | orchestrator | 00:01:15.122 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-23 00:01:15.122984 | orchestrator | 00:01:15.122 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.122988 | orchestrator | 00:01:15.122 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.122992 | orchestrator | 00:01:15.122 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.122995 | orchestrator | 00:01:15.122 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.122999 | orchestrator | 00:01:15.122 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.123008 | orchestrator | 00:01:15.122 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.123012 | orchestrator | 00:01:15.122 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.123017 | orchestrator | 00:01:15.122 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.123020 | orchestrator | 00:01:15.122 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.123027 | orchestrator | 00:01:15.122 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 
00:01:15.123031 | orchestrator | 00:01:15.122 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.123041 | orchestrator | 00:01:15.122 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.123045 | orchestrator | 00:01:15.122 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.123049 | orchestrator | 00:01:15.122 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.123053 | orchestrator | 00:01:15.122 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.123057 | orchestrator | 00:01:15.122 STDOUT terraform:  + name = "testbed-node-2" 2025-05-23 00:01:15.123061 | orchestrator | 00:01:15.122 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.123080 | orchestrator | 00:01:15.123 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.123145 | orchestrator | 00:01:15.123 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.123179 | orchestrator | 00:01:15.123 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.123229 | orchestrator | 00:01:15.123 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.123299 | orchestrator | 00:01:15.123 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.123324 | orchestrator | 00:01:15.123 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.123361 | orchestrator | 00:01:15.123 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.123402 | orchestrator | 00:01:15.123 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.123443 | orchestrator | 00:01:15.123 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.123483 | orchestrator | 00:01:15.123 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.123525 | orchestrator | 00:01:15.123 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.123580 | orchestrator | 00:01:15.123 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.123596 | orchestrator | 00:01:15.123 STDOUT terraform:  } 2025-05-23 00:01:15.123614 | orchestrator | 00:01:15.123 STDOUT terraform:  + network { 2025-05-23 00:01:15.123641 | orchestrator | 00:01:15.123 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.123685 | orchestrator | 00:01:15.123 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.123728 | orchestrator | 00:01:15.123 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.123774 | orchestrator | 00:01:15.123 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.123819 | orchestrator | 00:01:15.123 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.123864 | orchestrator | 00:01:15.123 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.123907 | orchestrator | 00:01:15.123 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.123925 | orchestrator | 00:01:15.123 STDOUT terraform:  } 2025-05-23 00:01:15.123932 | orchestrator | 00:01:15.123 STDOUT terraform:  } 2025-05-23 00:01:15.123999 | orchestrator | 00:01:15.123 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-23 00:01:15.124058 | orchestrator | 00:01:15.123 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.124109 | orchestrator | 00:01:15.124 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.124258 | orchestrator | 00:01:15.124 STDOUT terraform:  + access_ip_v6 = (known after apply) 
2025-05-23 00:01:15.124297 | orchestrator | 00:01:15.124 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.124358 | orchestrator | 00:01:15.124 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.124401 | orchestrator | 00:01:15.124 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.124432 | orchestrator | 00:01:15.124 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.124483 | orchestrator | 00:01:15.124 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.124533 | orchestrator | 00:01:15.124 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.124575 | orchestrator | 00:01:15.124 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 00:01:15.124610 | orchestrator | 00:01:15.124 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.124663 | orchestrator | 00:01:15.124 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.124712 | orchestrator | 00:01:15.124 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.124756 | orchestrator | 00:01:15.124 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.124789 | orchestrator | 00:01:15.124 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.124829 | orchestrator | 00:01:15.124 STDOUT terraform:  + name = "testbed-node-3" 2025-05-23 00:01:15.124860 | orchestrator | 00:01:15.124 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.124905 | orchestrator | 00:01:15.124 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.124950 | orchestrator | 00:01:15.124 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.125001 | orchestrator | 00:01:15.124 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.125026 | orchestrator | 00:01:15.124 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.125084 | orchestrator | 00:01:15.125 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.125105 | orchestrator | 00:01:15.125 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.125141 | orchestrator | 00:01:15.125 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.125186 | orchestrator | 00:01:15.125 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.125228 | orchestrator | 00:01:15.125 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.125266 | orchestrator | 00:01:15.125 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.125306 | orchestrator | 00:01:15.125 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.125361 | orchestrator | 00:01:15.125 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.125378 | orchestrator | 00:01:15.125 STDOUT terraform:  } 2025-05-23 00:01:15.125394 | orchestrator | 00:01:15.125 STDOUT terraform:  + network { 2025-05-23 00:01:15.125423 | orchestrator | 00:01:15.125 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.125466 | orchestrator | 00:01:15.125 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.125508 | orchestrator | 00:01:15.125 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.125548 | orchestrator | 00:01:15.125 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.125589 | orchestrator | 00:01:15.125 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.125629 | orchestrator | 00:01:15.125 STDOUT terraform:  + port = (known after apply) 
2025-05-23 00:01:15.125672 | orchestrator | 00:01:15.125 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.125678 | orchestrator | 00:01:15.125 STDOUT terraform:  } 2025-05-23 00:01:15.125702 | orchestrator | 00:01:15.125 STDOUT terraform:  } 2025-05-23 00:01:15.125758 | orchestrator | 00:01:15.125 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-23 00:01:15.125811 | orchestrator | 00:01:15.125 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.125855 | orchestrator | 00:01:15.125 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.125900 | orchestrator | 00:01:15.125 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.125949 | orchestrator | 00:01:15.125 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.125991 | orchestrator | 00:01:15.125 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.126056 | orchestrator | 00:01:15.125 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.126083 | orchestrator | 00:01:15.126 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.126137 | orchestrator | 00:01:15.126 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.126187 | orchestrator | 00:01:15.126 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.126224 | orchestrator | 00:01:15.126 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 00:01:15.126260 | orchestrator | 00:01:15.126 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.126306 | orchestrator | 00:01:15.126 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.126351 | orchestrator | 00:01:15.126 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.126397 | orchestrator | 00:01:15.126 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.126430 | orchestrator | 00:01:15.126 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.126470 | orchestrator | 00:01:15.126 STDOUT terraform:  + name = "testbed-node-4" 2025-05-23 00:01:15.126504 | orchestrator | 00:01:15.126 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.126549 | orchestrator | 00:01:15.126 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.126592 | orchestrator | 00:01:15.126 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.126622 | orchestrator | 00:01:15.126 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.126668 | orchestrator | 00:01:15.126 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.126731 | orchestrator | 00:01:15.126 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.126749 | orchestrator | 00:01:15.126 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.126780 | orchestrator | 00:01:15.126 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.126816 | orchestrator | 00:01:15.126 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.126854 | orchestrator | 00:01:15.126 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.126890 | orchestrator | 00:01:15.126 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.126927 | orchestrator | 00:01:15.126 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.126978 | orchestrator | 00:01:15.126 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.126984 | orchestrator | 
00:01:15.126 STDOUT terraform:  } 2025-05-23 00:01:15.127009 | orchestrator | 00:01:15.126 STDOUT terraform:  + network { 2025-05-23 00:01:15.127034 | orchestrator | 00:01:15.127 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.127073 | orchestrator | 00:01:15.127 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.127114 | orchestrator | 00:01:15.127 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.127304 | orchestrator | 00:01:15.127 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.127400 | orchestrator | 00:01:15.127 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.127417 | orchestrator | 00:01:15.127 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.127439 | orchestrator | 00:01:15.127 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.127451 | orchestrator | 00:01:15.127 STDOUT terraform:  } 2025-05-23 00:01:15.127463 | orchestrator | 00:01:15.127 STDOUT terraform:  } 2025-05-23 00:01:15.127475 | orchestrator | 00:01:15.127 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-23 00:01:15.127486 | orchestrator | 00:01:15.127 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-23 00:01:15.127497 | orchestrator | 00:01:15.127 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-23 00:01:15.127512 | orchestrator | 00:01:15.127 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-23 00:01:15.127524 | orchestrator | 00:01:15.127 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-23 00:01:15.127574 | orchestrator | 00:01:15.127 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.127594 | orchestrator | 00:01:15.127 STDOUT terraform:  + availability_zone = "nova" 2025-05-23 00:01:15.127634 | orchestrator | 00:01:15.127 STDOUT terraform:  + config_drive = true 2025-05-23 00:01:15.127650 | orchestrator | 00:01:15.127 STDOUT terraform:  + created = (known after apply) 2025-05-23 00:01:15.127706 | orchestrator | 00:01:15.127 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-23 00:01:15.127723 | orchestrator | 00:01:15.127 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-23 00:01:15.127761 | orchestrator | 00:01:15.127 STDOUT terraform:  + force_delete = false 2025-05-23 00:01:15.127799 | orchestrator | 00:01:15.127 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.127847 | orchestrator | 00:01:15.127 STDOUT terraform:  + image_id = (known after apply) 2025-05-23 00:01:15.127886 | orchestrator | 00:01:15.127 STDOUT terraform:  + image_name = (known after apply) 2025-05-23 00:01:15.127902 | orchestrator | 00:01:15.127 STDOUT terraform:  + key_pair = "testbed" 2025-05-23 00:01:15.127962 | orchestrator | 00:01:15.127 STDOUT terraform:  + name = "testbed-node-5" 2025-05-23 00:01:15.128047 | orchestrator | 00:01:15.127 STDOUT terraform:  + power_state = "active" 2025-05-23 00:01:15.128061 | orchestrator | 00:01:15.127 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.128076 | orchestrator | 00:01:15.128 STDOUT terraform:  + security_groups = (known after apply) 2025-05-23 00:01:15.128090 | orchestrator | 00:01:15.128 STDOUT terraform:  + stop_before_destroy = false 2025-05-23 00:01:15.128163 | orchestrator | 00:01:15.128 STDOUT terraform:  + updated = (known after apply) 2025-05-23 00:01:15.128285 | orchestrator | 00:01:15.128 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-23 00:01:15.128322 | orchestrator | 00:01:15.128 STDOUT terraform:  + block_device { 2025-05-23 00:01:15.128337 | orchestrator | 00:01:15.128 STDOUT terraform:  + boot_index = 0 2025-05-23 00:01:15.128388 | orchestrator | 00:01:15.128 STDOUT terraform:  + delete_on_termination = false 2025-05-23 00:01:15.128405 | orchestrator | 00:01:15.128 STDOUT terraform:  + destination_type = "volume" 2025-05-23 00:01:15.128442 | orchestrator | 00:01:15.128 STDOUT terraform:  + multiattach = false 2025-05-23 00:01:15.128472 | orchestrator | 00:01:15.128 STDOUT terraform:  + source_type = "volume" 2025-05-23 00:01:15.128532 | orchestrator | 00:01:15.128 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.128546 | orchestrator | 00:01:15.128 STDOUT terraform:  } 2025-05-23 00:01:15.128560 | orchestrator | 00:01:15.128 STDOUT terraform:  + network { 2025-05-23 00:01:15.128575 | orchestrator | 00:01:15.128 STDOUT terraform:  + access_network = false 2025-05-23 00:01:15.128622 | orchestrator | 00:01:15.128 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-23 00:01:15.128648 | orchestrator | 00:01:15.128 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-23 00:01:15.128698 | orchestrator | 00:01:15.128 STDOUT terraform:  + mac = (known after apply) 2025-05-23 00:01:15.128714 | orchestrator | 00:01:15.128 STDOUT terraform:  + name = (known after apply) 2025-05-23 00:01:15.128769 | orchestrator | 00:01:15.128 STDOUT terraform:  + port = (known after apply) 2025-05-23 00:01:15.128786 | orchestrator | 00:01:15.128 STDOUT terraform:  + uuid = (known after apply) 2025-05-23 00:01:15.128801 | orchestrator | 00:01:15.128 STDOUT terraform:  } 2025-05-23 00:01:15.128822 | orchestrator | 00:01:15.128 STDOUT terraform:  } 2025-05-23 00:01:15.128861 | orchestrator | 00:01:15.128 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-23 00:01:15.128910 | orchestrator | 00:01:15.128 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-23 00:01:15.128926 | orchestrator | 00:01:15.128 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-23 00:01:15.128978 | orchestrator | 00:01:15.128 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.128994 | orchestrator | 00:01:15.128 STDOUT terraform:  + name = "testbed" 2025-05-23 00:01:15.129009 | orchestrator | 00:01:15.128 STDOUT terraform:  + private_key = (sensitive value) 2025-05-23 00:01:15.129059 | orchestrator | 00:01:15.129 STDOUT terraform:  + public_key = (known after apply) 2025-05-23 00:01:15.129076 | orchestrator | 00:01:15.129 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.129142 | orchestrator | 00:01:15.129 STDOUT terraform:  + user_id = (known after apply) 2025-05-23 00:01:15.129155 | orchestrator | 00:01:15.129 STDOUT terraform:  } 2025-05-23 00:01:15.129213 | orchestrator | 00:01:15.129 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-23 00:01:15.129273 | orchestrator | 00:01:15.129 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.129297 | orchestrator | 00:01:15.129 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.129336 | orchestrator | 00:01:15.129 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.129352 | orchestrator | 00:01:15.129 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.129390 | 
orchestrator | 00:01:15.129 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.129429 | orchestrator | 00:01:15.129 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.129441 | orchestrator | 00:01:15.129 STDOUT terraform:  } 2025-05-23 00:01:15.129500 | orchestrator | 00:01:15.129 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-23 00:01:15.129560 | orchestrator | 00:01:15.129 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.129577 | orchestrator | 00:01:15.129 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.129627 | orchestrator | 00:01:15.129 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.129643 | orchestrator | 00:01:15.129 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.129690 | orchestrator | 00:01:15.129 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.129707 | orchestrator | 00:01:15.129 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.129721 | orchestrator | 00:01:15.129 STDOUT terraform:  } 2025-05-23 00:01:15.129794 | orchestrator | 00:01:15.129 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-23 00:01:15.129852 | orchestrator | 00:01:15.129 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.129868 | orchestrator | 00:01:15.129 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.129918 | orchestrator | 00:01:15.129 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.129934 | orchestrator | 00:01:15.129 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.129984 | orchestrator | 00:01:15.129 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.130001 | orchestrator | 00:01:15.129 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.130043 | orchestrator | 00:01:15.129 STDOUT terraform:  } 2025-05-23 00:01:15.130104 | orchestrator | 00:01:15.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-23 00:01:15.130172 | orchestrator | 00:01:15.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.130190 | orchestrator | 00:01:15.130 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.130237 | orchestrator | 00:01:15.130 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.130254 | orchestrator | 00:01:15.130 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.130304 | orchestrator | 00:01:15.130 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.130328 | orchestrator | 00:01:15.130 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.130343 | orchestrator | 00:01:15.130 STDOUT terraform:  } 2025-05-23 00:01:15.130491 | orchestrator | 00:01:15.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-23 00:01:15.130520 | orchestrator | 00:01:15.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.130533 | orchestrator | 00:01:15.130 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.130542 | orchestrator | 00:01:15.130 STDOUT terraform:  + id = (known after apply) 
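For orientation, the node_server and node_volume_attachment entries in this plan are consistent with a count-indexed, boot-from-volume instance definition along the following lines. This is only a sketch reconstructed from the plan values shown above (flavor OSISM-8V-32, availability zone nova, key pair testbed, config drive enabled); the variable names and the referenced volume and port resources are illustrative assumptions, not the actual osism/testbed sources.
# Sketch only: reconstructed from the plan output above, not the real testbed code.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                                  # node ports .10-.15 below imply six nodes
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = var.node_user_data                 # assumed variable; the plan only shows its hash

  # boot from a pre-created volume rather than directly from an image
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id   # assumed volume resource
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

# Additional data volumes are attached separately; nine attachments ([0]..[8]) are
# planned in this run. The volume-to-node mapping is not visible in the plan output,
# so the index arithmetic below is purely illustrative.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id        # assumed volume resource
}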
2025-05-23 00:01:15.130580 | orchestrator | 00:01:15.130 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.130612 | orchestrator | 00:01:15.130 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.130644 | orchestrator | 00:01:15.130 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.130655 | orchestrator | 00:01:15.130 STDOUT terraform:  } 2025-05-23 00:01:15.130715 | orchestrator | 00:01:15.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-05-23 00:01:15.130774 | orchestrator | 00:01:15.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.130808 | orchestrator | 00:01:15.130 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.130843 | orchestrator | 00:01:15.130 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.130879 | orchestrator | 00:01:15.130 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.130915 | orchestrator | 00:01:15.130 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.130948 | orchestrator | 00:01:15.130 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.130958 | orchestrator | 00:01:15.130 STDOUT terraform:  } 2025-05-23 00:01:15.131028 | orchestrator | 00:01:15.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-23 00:01:15.131087 | orchestrator | 00:01:15.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.131120 | orchestrator | 00:01:15.131 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.131170 | orchestrator | 00:01:15.131 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.131204 | orchestrator | 00:01:15.131 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.131258 | orchestrator | 00:01:15.131 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.131265 | orchestrator | 00:01:15.131 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.131273 | orchestrator | 00:01:15.131 STDOUT terraform:  } 2025-05-23 00:01:15.131334 | orchestrator | 00:01:15.131 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-23 00:01:15.131391 | orchestrator | 00:01:15.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.131426 | orchestrator | 00:01:15.131 STDOUT terraform:  + device = (known after apply) 2025-05-23 00:01:15.131460 | orchestrator | 00:01:15.131 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.131510 | orchestrator | 00:01:15.131 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.131526 | orchestrator | 00:01:15.131 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.131559 | orchestrator | 00:01:15.131 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.131565 | orchestrator | 00:01:15.131 STDOUT terraform:  } 2025-05-23 00:01:15.131634 | orchestrator | 00:01:15.131 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-23 00:01:15.131687 | orchestrator | 00:01:15.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-23 00:01:15.131722 | orchestrator | 00:01:15.131 STDOUT 
terraform:  + device = (known after apply) 2025-05-23 00:01:15.131756 | orchestrator | 00:01:15.131 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.131789 | orchestrator | 00:01:15.131 STDOUT terraform:  + instance_id = (known after apply) 2025-05-23 00:01:15.131823 | orchestrator | 00:01:15.131 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.131857 | orchestrator | 00:01:15.131 STDOUT terraform:  + volume_id = (known after apply) 2025-05-23 00:01:15.131863 | orchestrator | 00:01:15.131 STDOUT terraform:  } 2025-05-23 00:01:15.131938 | orchestrator | 00:01:15.131 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-23 00:01:15.132007 | orchestrator | 00:01:15.131 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-23 00:01:15.132041 | orchestrator | 00:01:15.132 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-23 00:01:15.132076 | orchestrator | 00:01:15.132 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-23 00:01:15.132110 | orchestrator | 00:01:15.132 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.132179 | orchestrator | 00:01:15.132 STDOUT terraform:  + port_id = (known after apply) 2025-05-23 00:01:15.132214 | orchestrator | 00:01:15.132 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.132235 | orchestrator | 00:01:15.132 STDOUT terraform:  } 2025-05-23 00:01:15.132292 | orchestrator | 00:01:15.132 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-23 00:01:15.132351 | orchestrator | 00:01:15.132 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-23 00:01:15.132383 | orchestrator | 00:01:15.132 STDOUT terraform:  + address = (known after apply) 2025-05-23 00:01:15.132413 | orchestrator | 00:01:15.132 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.132444 | orchestrator | 00:01:15.132 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-23 00:01:15.132478 | orchestrator | 00:01:15.132 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.132509 | orchestrator | 00:01:15.132 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-23 00:01:15.132540 | orchestrator | 00:01:15.132 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.132563 | orchestrator | 00:01:15.132 STDOUT terraform:  + pool = "public" 2025-05-23 00:01:15.132593 | orchestrator | 00:01:15.132 STDOUT terraform:  + port_id = (known after apply) 2025-05-23 00:01:15.132627 | orchestrator | 00:01:15.132 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.132657 | orchestrator | 00:01:15.132 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.132686 | orchestrator | 00:01:15.132 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.132692 | orchestrator | 00:01:15.132 STDOUT terraform:  } 2025-05-23 00:01:15.132751 | orchestrator | 00:01:15.132 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-23 00:01:15.132804 | orchestrator | 00:01:15.132 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-23 00:01:15.132849 | orchestrator | 00:01:15.132 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.132895 | orchestrator | 00:01:15.132 STDOUT terraform:  + all_tags = 
(known after apply) 2025-05-23 00:01:15.132922 | orchestrator | 00:01:15.132 STDOUT terraform:  + availability_zone_hints = [ 2025-05-23 00:01:15.132940 | orchestrator | 00:01:15.132 STDOUT terraform:  + "nova", 2025-05-23 00:01:15.132946 | orchestrator | 00:01:15.132 STDOUT terraform:  ] 2025-05-23 00:01:15.132994 | orchestrator | 00:01:15.132 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-23 00:01:15.133040 | orchestrator | 00:01:15.132 STDOUT terraform:  + external = (known after apply) 2025-05-23 00:01:15.133084 | orchestrator | 00:01:15.133 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.133142 | orchestrator | 00:01:15.133 STDOUT terraform:  + mtu = (known after apply) 2025-05-23 00:01:15.133199 | orchestrator | 00:01:15.133 STDOUT terraform:  + name = "net-testbed-management" 2025-05-23 00:01:15.133233 | orchestrator | 00:01:15.133 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.133277 | orchestrator | 00:01:15.133 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.133321 | orchestrator | 00:01:15.133 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.133367 | orchestrator | 00:01:15.133 STDOUT terraform:  + shared = (known after apply) 2025-05-23 00:01:15.133411 | orchestrator | 00:01:15.133 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.133460 | orchestrator | 00:01:15.133 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-23 00:01:15.133485 | orchestrator | 00:01:15.133 STDOUT terraform:  + segments (known after apply) 2025-05-23 00:01:15.133492 | orchestrator | 00:01:15.133 STDOUT terraform:  } 2025-05-23 00:01:15.133553 | orchestrator | 00:01:15.133 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-23 00:01:15.133608 | orchestrator | 00:01:15.133 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-23 00:01:15.133651 | orchestrator | 00:01:15.133 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.133694 | orchestrator | 00:01:15.133 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.133737 | orchestrator | 00:01:15.133 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.133781 | orchestrator | 00:01:15.133 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.133825 | orchestrator | 00:01:15.133 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.133869 | orchestrator | 00:01:15.133 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.133912 | orchestrator | 00:01:15.133 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.133956 | orchestrator | 00:01:15.133 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.134000 | orchestrator | 00:01:15.133 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.134082 | orchestrator | 00:01:15.133 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.134109 | orchestrator | 00:01:15.134 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.134159 | orchestrator | 00:01:15.134 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.134204 | orchestrator | 00:01:15.134 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.134248 | orchestrator | 00:01:15.134 STDOUT terraform:  + region = (known after 
apply) 2025-05-23 00:01:15.134291 | orchestrator | 00:01:15.134 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.134335 | orchestrator | 00:01:15.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.134361 | orchestrator | 00:01:15.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.134397 | orchestrator | 00:01:15.134 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.134419 | orchestrator | 00:01:15.134 STDOUT terraform:  } 2025-05-23 00:01:15.134437 | orchestrator | 00:01:15.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.134471 | orchestrator | 00:01:15.134 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.134477 | orchestrator | 00:01:15.134 STDOUT terraform:  } 2025-05-23 00:01:15.134515 | orchestrator | 00:01:15.134 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.134521 | orchestrator | 00:01:15.134 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.134557 | orchestrator | 00:01:15.134 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-23 00:01:15.134592 | orchestrator | 00:01:15.134 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.134599 | orchestrator | 00:01:15.134 STDOUT terraform:  } 2025-05-23 00:01:15.134621 | orchestrator | 00:01:15.134 STDOUT terraform:  } 2025-05-23 00:01:15.134677 | orchestrator | 00:01:15.134 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-05-23 00:01:15.134732 | orchestrator | 00:01:15.134 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.134776 | orchestrator | 00:01:15.134 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.134821 | orchestrator | 00:01:15.134 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.134865 | orchestrator | 00:01:15.134 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.134910 | orchestrator | 00:01:15.134 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.134953 | orchestrator | 00:01:15.134 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.134998 | orchestrator | 00:01:15.134 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.135043 | orchestrator | 00:01:15.134 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.135086 | orchestrator | 00:01:15.135 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.135140 | orchestrator | 00:01:15.135 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.135186 | orchestrator | 00:01:15.135 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.135231 | orchestrator | 00:01:15.135 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.135274 | orchestrator | 00:01:15.135 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.135317 | orchestrator | 00:01:15.135 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.135363 | orchestrator | 00:01:15.135 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.135406 | orchestrator | 00:01:15.135 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.135450 | orchestrator | 00:01:15.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.135475 | orchestrator | 00:01:15.135 STDOUT terraform:  
+ allowed_address_pairs { 2025-05-23 00:01:15.135510 | orchestrator | 00:01:15.135 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.135516 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135546 | orchestrator | 00:01:15.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.135581 | orchestrator | 00:01:15.135 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.135587 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135618 | orchestrator | 00:01:15.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.135652 | orchestrator | 00:01:15.135 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.135658 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135688 | orchestrator | 00:01:15.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.135725 | orchestrator | 00:01:15.135 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.135747 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135773 | orchestrator | 00:01:15.135 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.135780 | orchestrator | 00:01:15.135 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.135817 | orchestrator | 00:01:15.135 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-23 00:01:15.135853 | orchestrator | 00:01:15.135 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.135859 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135881 | orchestrator | 00:01:15.135 STDOUT terraform:  } 2025-05-23 00:01:15.135937 | orchestrator | 00:01:15.135 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-23 00:01:15.135990 | orchestrator | 00:01:15.135 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.136035 | orchestrator | 00:01:15.135 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.136077 | orchestrator | 00:01:15.136 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.136120 | orchestrator | 00:01:15.136 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.136194 | orchestrator | 00:01:15.136 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.136231 | orchestrator | 00:01:15.136 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.136300 | orchestrator | 00:01:15.136 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.136318 | orchestrator | 00:01:15.136 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.136362 | orchestrator | 00:01:15.136 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.136406 | orchestrator | 00:01:15.136 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.136449 | orchestrator | 00:01:15.136 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.136494 | orchestrator | 00:01:15.136 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.136537 | orchestrator | 00:01:15.136 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.136581 | orchestrator | 00:01:15.136 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.136626 | orchestrator | 00:01:15.136 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.136669 | 
orchestrator | 00:01:15.136 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.136713 | orchestrator | 00:01:15.136 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.136737 | orchestrator | 00:01:15.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.136772 | orchestrator | 00:01:15.136 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.136778 | orchestrator | 00:01:15.136 STDOUT terraform:  } 2025-05-23 00:01:15.136809 | orchestrator | 00:01:15.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.136884 | orchestrator | 00:01:15.136 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.136891 | orchestrator | 00:01:15.136 STDOUT terraform:  } 2025-05-23 00:01:15.136920 | orchestrator | 00:01:15.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.136961 | orchestrator | 00:01:15.136 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.136967 | orchestrator | 00:01:15.136 STDOUT terraform:  } 2025-05-23 00:01:15.136993 | orchestrator | 00:01:15.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.137027 | orchestrator | 00:01:15.136 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.137034 | orchestrator | 00:01:15.137 STDOUT terraform:  } 2025-05-23 00:01:15.137068 | orchestrator | 00:01:15.137 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.137090 | orchestrator | 00:01:15.137 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.137137 | orchestrator | 00:01:15.137 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-23 00:01:15.137184 | orchestrator | 00:01:15.137 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.137191 | orchestrator | 00:01:15.137 STDOUT terraform:  } 2025-05-23 00:01:15.137214 | orchestrator | 00:01:15.137 STDOUT terraform:  } 2025-05-23 00:01:15.137274 | orchestrator | 00:01:15.137 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-23 00:01:15.137325 | orchestrator | 00:01:15.137 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.137369 | orchestrator | 00:01:15.137 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.137412 | orchestrator | 00:01:15.137 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.137456 | orchestrator | 00:01:15.137 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.137507 | orchestrator | 00:01:15.137 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.137551 | orchestrator | 00:01:15.137 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.137595 | orchestrator | 00:01:15.137 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.137644 | orchestrator | 00:01:15.137 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.137689 | orchestrator | 00:01:15.137 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.137734 | orchestrator | 00:01:15.137 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.137776 | orchestrator | 00:01:15.137 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.137823 | orchestrator | 00:01:15.137 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.137864 | orchestrator | 00:01:15.137 STDOUT terraform:  + port_security_enabled = 
(known after apply) 2025-05-23 00:01:15.137910 | orchestrator | 00:01:15.137 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.137954 | orchestrator | 00:01:15.137 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.137997 | orchestrator | 00:01:15.137 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.138062 | orchestrator | 00:01:15.137 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.138081 | orchestrator | 00:01:15.138 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.138117 | orchestrator | 00:01:15.138 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.138135 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138288 | orchestrator | 00:01:15.138 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.138346 | orchestrator | 00:01:15.138 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.138360 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138387 | orchestrator | 00:01:15.138 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.138398 | orchestrator | 00:01:15.138 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.138418 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138429 | orchestrator | 00:01:15.138 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.138440 | orchestrator | 00:01:15.138 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.138451 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138462 | orchestrator | 00:01:15.138 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.138472 | orchestrator | 00:01:15.138 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.138483 | orchestrator | 00:01:15.138 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-23 00:01:15.138498 | orchestrator | 00:01:15.138 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.138509 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138520 | orchestrator | 00:01:15.138 STDOUT terraform:  } 2025-05-23 00:01:15.138532 | orchestrator | 00:01:15.138 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-23 00:01:15.138547 | orchestrator | 00:01:15.138 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.138615 | orchestrator | 00:01:15.138 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.138633 | orchestrator | 00:01:15.138 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.138685 | orchestrator | 00:01:15.138 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.138725 | orchestrator | 00:01:15.138 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.138763 | orchestrator | 00:01:15.138 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.138801 | orchestrator | 00:01:15.138 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.138850 | orchestrator | 00:01:15.138 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.138889 | orchestrator | 00:01:15.138 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.138938 | orchestrator | 00:01:15.138 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.138978 | orchestrator | 00:01:15.138 
STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.139015 | orchestrator | 00:01:15.138 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.139054 | orchestrator | 00:01:15.138 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.139092 | orchestrator | 00:01:15.139 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.139195 | orchestrator | 00:01:15.139 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.139219 | orchestrator | 00:01:15.139 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.139235 | orchestrator | 00:01:15.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.139258 | orchestrator | 00:01:15.139 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.139274 | orchestrator | 00:01:15.139 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.139289 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139340 | orchestrator | 00:01:15.139 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.139357 | orchestrator | 00:01:15.139 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.139368 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139382 | orchestrator | 00:01:15.139 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.139420 | orchestrator | 00:01:15.139 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.139433 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139448 | orchestrator | 00:01:15.139 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.139496 | orchestrator | 00:01:15.139 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.139509 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139524 | orchestrator | 00:01:15.139 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.139539 | orchestrator | 00:01:15.139 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.139576 | orchestrator | 00:01:15.139 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-23 00:01:15.139593 | orchestrator | 00:01:15.139 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.139607 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139622 | orchestrator | 00:01:15.139 STDOUT terraform:  } 2025-05-23 00:01:15.139692 | orchestrator | 00:01:15.139 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-23 00:01:15.139746 | orchestrator | 00:01:15.139 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.139785 | orchestrator | 00:01:15.139 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.139828 | orchestrator | 00:01:15.139 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.139854 | orchestrator | 00:01:15.139 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.139912 | orchestrator | 00:01:15.139 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.139931 | orchestrator | 00:01:15.139 STDOUT terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.139987 | orchestrator | 00:01:15.139 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.140027 | orchestrator | 00:01:15.139 STDOUT terraform:  + dns_assignment = (known after apply) 
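The node_port_management blocks repeat the same four allowed address pairs and count upwards through the fixed IPs 192.168.16.10 to 192.168.16.15; a count-indexed port definition of roughly the following shape would produce such a plan. The subnet reference and the literal values are assumptions taken only from the plan output, not from the testbed sources.
# Sketch only: inferred from the repeated node_port_management plan blocks above.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource name
    ip_address = "192.168.16.${10 + count.index}"                      # .10 .. .15, matching the plan
  }

  # identical allowed address pairs on every node port, as listed in the plan
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}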
2025-05-23 00:01:15.140066 | orchestrator | 00:01:15.140 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.140104 | orchestrator | 00:01:15.140 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.140179 | orchestrator | 00:01:15.140 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.140197 | orchestrator | 00:01:15.140 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.140252 | orchestrator | 00:01:15.140 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.140291 | orchestrator | 00:01:15.140 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.140330 | orchestrator | 00:01:15.140 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.140368 | orchestrator | 00:01:15.140 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.140417 | orchestrator | 00:01:15.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.140433 | orchestrator | 00:01:15.140 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.140472 | orchestrator | 00:01:15.140 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.140485 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140499 | orchestrator | 00:01:15.140 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.140538 | orchestrator | 00:01:15.140 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.140550 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140565 | orchestrator | 00:01:15.140 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.140598 | orchestrator | 00:01:15.140 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.140614 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140629 | orchestrator | 00:01:15.140 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.140668 | orchestrator | 00:01:15.140 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.140681 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140695 | orchestrator | 00:01:15.140 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.140709 | orchestrator | 00:01:15.140 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.140724 | orchestrator | 00:01:15.140 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-23 00:01:15.140847 | orchestrator | 00:01:15.140 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.140870 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140876 | orchestrator | 00:01:15.140 STDOUT terraform:  } 2025-05-23 00:01:15.140885 | orchestrator | 00:01:15.140 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-23 00:01:15.140905 | orchestrator | 00:01:15.140 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-23 00:01:15.140948 | orchestrator | 00:01:15.140 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.140989 | orchestrator | 00:01:15.140 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-23 00:01:15.141031 | orchestrator | 00:01:15.140 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-23 00:01:15.141074 | orchestrator | 00:01:15.141 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.141118 | orchestrator | 00:01:15.141 STDOUT 
terraform:  + device_id = (known after apply) 2025-05-23 00:01:15.141170 | orchestrator | 00:01:15.141 STDOUT terraform:  + device_owner = (known after apply) 2025-05-23 00:01:15.141217 | orchestrator | 00:01:15.141 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-23 00:01:15.141258 | orchestrator | 00:01:15.141 STDOUT terraform:  + dns_name = (known after apply) 2025-05-23 00:01:15.141302 | orchestrator | 00:01:15.141 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.141345 | orchestrator | 00:01:15.141 STDOUT terraform:  + mac_address = (known after apply) 2025-05-23 00:01:15.141389 | orchestrator | 00:01:15.141 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.141432 | orchestrator | 00:01:15.141 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-23 00:01:15.141475 | orchestrator | 00:01:15.141 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-23 00:01:15.141519 | orchestrator | 00:01:15.141 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.141561 | orchestrator | 00:01:15.141 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-23 00:01:15.141606 | orchestrator | 00:01:15.141 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.141631 | orchestrator | 00:01:15.141 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.141666 | orchestrator | 00:01:15.141 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-23 00:01:15.141672 | orchestrator | 00:01:15.141 STDOUT terraform:  } 2025-05-23 00:01:15.141702 | orchestrator | 00:01:15.141 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.141738 | orchestrator | 00:01:15.141 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-23 00:01:15.141744 | orchestrator | 00:01:15.141 STDOUT terraform:  } 2025-05-23 00:01:15.141774 | orchestrator | 00:01:15.141 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.141808 | orchestrator | 00:01:15.141 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-23 00:01:15.141814 | orchestrator | 00:01:15.141 STDOUT terraform:  } 2025-05-23 00:01:15.141843 | orchestrator | 00:01:15.141 STDOUT terraform:  + allowed_address_pairs { 2025-05-23 00:01:15.141877 | orchestrator | 00:01:15.141 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-23 00:01:15.141883 | orchestrator | 00:01:15.141 STDOUT terraform:  } 2025-05-23 00:01:15.141918 | orchestrator | 00:01:15.141 STDOUT terraform:  + binding (known after apply) 2025-05-23 00:01:15.141924 | orchestrator | 00:01:15.141 STDOUT terraform:  + fixed_ip { 2025-05-23 00:01:15.141978 | orchestrator | 00:01:15.141 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-23 00:01:15.141985 | orchestrator | 00:01:15.141 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.142008 | orchestrator | 00:01:15.141 STDOUT terraform:  } 2025-05-23 00:01:15.142027 | orchestrator | 00:01:15.142 STDOUT terraform:  } 2025-05-23 00:01:15.142091 | orchestrator | 00:01:15.142 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-23 00:01:15.142159 | orchestrator | 00:01:15.142 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-23 00:01:15.142183 | orchestrator | 00:01:15.142 STDOUT terraform:  + force_destroy = false 2025-05-23 00:01:15.142219 | orchestrator | 00:01:15.142 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.142255 | 
orchestrator | 00:01:15.142 STDOUT terraform:  + port_id = (known after apply) 2025-05-23 00:01:15.142292 | orchestrator | 00:01:15.142 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.142327 | orchestrator | 00:01:15.142 STDOUT terraform:  + router_id = (known after apply) 2025-05-23 00:01:15.142362 | orchestrator | 00:01:15.142 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-23 00:01:15.142379 | orchestrator | 00:01:15.142 STDOUT terraform:  } 2025-05-23 00:01:15.142397 | orchestrator | 00:01:15.142 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-23 00:01:15.142520 | orchestrator | 00:01:15.142 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-23 00:01:15.142561 | orchestrator | 00:01:15.142 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-23 00:01:15.142604 | orchestrator | 00:01:15.142 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.142633 | orchestrator | 00:01:15.142 STDOUT terraform:  + availability_zone_hints = [ 2025-05-23 00:01:15.142651 | orchestrator | 00:01:15.142 STDOUT terraform:  + "nova", 2025-05-23 00:01:15.142669 | orchestrator | 00:01:15.142 STDOUT terraform:  ] 2025-05-23 00:01:15.142714 | orchestrator | 00:01:15.142 STDOUT terraform:  + distributed = (known after apply) 2025-05-23 00:01:15.142758 | orchestrator | 00:01:15.142 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-23 00:01:15.142818 | orchestrator | 00:01:15.142 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-23 00:01:15.142862 | orchestrator | 00:01:15.142 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.142898 | orchestrator | 00:01:15.142 STDOUT terraform:  + name = "testbed" 2025-05-23 00:01:15.142942 | orchestrator | 00:01:15.142 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.142987 | orchestrator | 00:01:15.142 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.143023 | orchestrator | 00:01:15.142 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-23 00:01:15.143040 | orchestrator | 00:01:15.143 STDOUT terraform:  } 2025-05-23 00:01:15.143110 | orchestrator | 00:01:15.143 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-23 00:01:15.143208 | orchestrator | 00:01:15.143 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-23 00:01:15.143228 | orchestrator | 00:01:15.143 STDOUT terraform:  + description = "ssh" 2025-05-23 00:01:15.143258 | orchestrator | 00:01:15.143 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.143286 | orchestrator | 00:01:15.143 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.143324 | orchestrator | 00:01:15.143 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.143350 | orchestrator | 00:01:15.143 STDOUT terraform:  + port_range_max = 22 2025-05-23 00:01:15.143373 | orchestrator | 00:01:15.143 STDOUT terraform:  + port_range_min = 22 2025-05-23 00:01:15.143399 | orchestrator | 00:01:15.143 STDOUT terraform:  + protocol = "tcp" 2025-05-23 00:01:15.143435 | orchestrator | 00:01:15.143 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.143471 | orchestrator | 00:01:15.143 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.143500
| orchestrator | 00:01:15.143 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.143538 | orchestrator | 00:01:15.143 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.143579 | orchestrator | 00:01:15.143 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.143595 | orchestrator | 00:01:15.143 STDOUT terraform:  } 2025-05-23 00:01:15.143658 | orchestrator | 00:01:15.143 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-23 00:01:15.143723 | orchestrator | 00:01:15.143 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-23 00:01:15.143754 | orchestrator | 00:01:15.143 STDOUT terraform:  + description = "wireguard" 2025-05-23 00:01:15.143784 | orchestrator | 00:01:15.143 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.143810 | orchestrator | 00:01:15.143 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.143848 | orchestrator | 00:01:15.143 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.143875 | orchestrator | 00:01:15.143 STDOUT terraform:  + port_range_max = 51820 2025-05-23 00:01:15.143901 | orchestrator | 00:01:15.143 STDOUT terraform:  + port_range_min = 51820 2025-05-23 00:01:15.143927 | orchestrator | 00:01:15.143 STDOUT terraform:  + protocol = "udp" 2025-05-23 00:01:15.143964 | orchestrator | 00:01:15.143 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.144002 | orchestrator | 00:01:15.143 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.144031 | orchestrator | 00:01:15.143 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.144067 | orchestrator | 00:01:15.144 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.144104 | orchestrator | 00:01:15.144 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150445 | orchestrator | 00:01:15.144 STDOUT terraform:  } 2025-05-23 00:01:15.150486 | orchestrator | 00:01:15.144 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-23 00:01:15.150495 | orchestrator | 00:01:15.144 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-23 00:01:15.150501 | orchestrator | 00:01:15.144 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150508 | orchestrator | 00:01:15.144 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150514 | orchestrator | 00:01:15.144 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150520 | orchestrator | 00:01:15.144 STDOUT terraform:  + protocol = "tcp" 2025-05-23 00:01:15.150527 | orchestrator | 00:01:15.144 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150544 | orchestrator | 00:01:15.144 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.150550 | orchestrator | 00:01:15.144 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-23 00:01:15.150556 | orchestrator | 00:01:15.144 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150562 | orchestrator | 00:01:15.144 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150568 | orchestrator | 00:01:15.144 STDOUT terraform:  } 2025-05-23 00:01:15.150575 | orchestrator | 00:01:15.144 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-23 00:01:15.150581 | orchestrator | 00:01:15.144 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-23 00:01:15.150586 | orchestrator | 00:01:15.144 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150592 | orchestrator | 00:01:15.144 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150605 | orchestrator | 00:01:15.144 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150612 | orchestrator | 00:01:15.144 STDOUT terraform:  + protocol = "udp" 2025-05-23 00:01:15.150618 | orchestrator | 00:01:15.144 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150623 | orchestrator | 00:01:15.144 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.150629 | orchestrator | 00:01:15.144 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-23 00:01:15.150635 | orchestrator | 00:01:15.144 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150641 | orchestrator | 00:01:15.144 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150647 | orchestrator | 00:01:15.144 STDOUT terraform:  } 2025-05-23 00:01:15.150654 | orchestrator | 00:01:15.144 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-23 00:01:15.150660 | orchestrator | 00:01:15.144 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-23 00:01:15.150666 | orchestrator | 00:01:15.145 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150672 | orchestrator | 00:01:15.145 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150678 | orchestrator | 00:01:15.145 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150687 | orchestrator | 00:01:15.145 STDOUT terraform:  + protocol = "icmp" 2025-05-23 00:01:15.150694 | orchestrator | 00:01:15.145 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150700 | orchestrator | 00:01:15.145 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.150706 | orchestrator | 00:01:15.145 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.150712 | orchestrator | 00:01:15.145 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150718 | orchestrator | 00:01:15.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150739 | orchestrator | 00:01:15.145 STDOUT terraform:  } 2025-05-23 00:01:15.150752 | orchestrator | 00:01:15.145 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-23 00:01:15.150758 | orchestrator | 00:01:15.145 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-23 00:01:15.150764 | orchestrator | 00:01:15.145 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150770 | orchestrator | 00:01:15.145 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150777 | orchestrator | 00:01:15.145 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150783 | orchestrator | 00:01:15.145 STDOUT terraform:  + protocol = "tcp" 2025-05-23 00:01:15.150790 | orchestrator | 00:01:15.145 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150796 | orchestrator | 00:01:15.145 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-05-23 00:01:15.150803 | orchestrator | 00:01:15.145 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.150810 | orchestrator | 00:01:15.145 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150817 | orchestrator | 00:01:15.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150823 | orchestrator | 00:01:15.145 STDOUT terraform:  } 2025-05-23 00:01:15.150830 | orchestrator | 00:01:15.145 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-23 00:01:15.150836 | orchestrator | 00:01:15.145 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-23 00:01:15.150843 | orchestrator | 00:01:15.145 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150849 | orchestrator | 00:01:15.145 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150856 | orchestrator | 00:01:15.145 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150862 | orchestrator | 00:01:15.145 STDOUT terraform:  + protocol = "udp" 2025-05-23 00:01:15.150869 | orchestrator | 00:01:15.145 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150875 | orchestrator | 00:01:15.145 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.150881 | orchestrator | 00:01:15.145 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.150887 | orchestrator | 00:01:15.145 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150894 | orchestrator | 00:01:15.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150900 | orchestrator | 00:01:15.145 STDOUT terraform:  } 2025-05-23 00:01:15.150907 | orchestrator | 00:01:15.146 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-23 00:01:15.150914 | orchestrator | 00:01:15.146 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-23 00:01:15.150920 | orchestrator | 00:01:15.146 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.150927 | orchestrator | 00:01:15.146 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.150933 | orchestrator | 00:01:15.146 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.150945 | orchestrator | 00:01:15.146 STDOUT terraform:  + protocol = "icmp" 2025-05-23 00:01:15.150956 | orchestrator | 00:01:15.146 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.150962 | orchestrator | 00:01:15.146 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.150968 | orchestrator | 00:01:15.146 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.150975 | orchestrator | 00:01:15.146 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.150981 | orchestrator | 00:01:15.146 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.150988 | orchestrator | 00:01:15.146 STDOUT terraform:  } 2025-05-23 00:01:15.150999 | orchestrator | 00:01:15.146 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-23 00:01:15.151006 | orchestrator | 00:01:15.146 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-23 00:01:15.151012 | orchestrator | 00:01:15.146 STDOUT terraform:  + description = 
"vrrp" 2025-05-23 00:01:15.151018 | orchestrator | 00:01:15.146 STDOUT terraform:  + direction = "ingress" 2025-05-23 00:01:15.151025 | orchestrator | 00:01:15.146 STDOUT terraform:  + ethertype = "IPv4" 2025-05-23 00:01:15.151032 | orchestrator | 00:01:15.147 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151039 | orchestrator | 00:01:15.147 STDOUT terraform:  + protocol = "112" 2025-05-23 00:01:15.151046 | orchestrator | 00:01:15.147 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.151053 | orchestrator | 00:01:15.147 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-23 00:01:15.151060 | orchestrator | 00:01:15.147 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-23 00:01:15.151067 | orchestrator | 00:01:15.147 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-23 00:01:15.151074 | orchestrator | 00:01:15.147 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.151081 | orchestrator | 00:01:15.147 STDOUT terraform:  } 2025-05-23 00:01:15.151088 | orchestrator | 00:01:15.147 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-23 00:01:15.151095 | orchestrator | 00:01:15.147 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-23 00:01:15.151101 | orchestrator | 00:01:15.147 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.151108 | orchestrator | 00:01:15.147 STDOUT terraform:  + description = "management security group" 2025-05-23 00:01:15.151115 | orchestrator | 00:01:15.147 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151121 | orchestrator | 00:01:15.147 STDOUT terraform:  + name = "testbed-management" 2025-05-23 00:01:15.151171 | orchestrator | 00:01:15.147 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.151177 | orchestrator | 00:01:15.147 STDOUT terraform:  + stateful = (known after apply) 2025-05-23 00:01:15.151185 | orchestrator | 00:01:15.147 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.151199 | orchestrator | 00:01:15.147 STDOUT terraform:  } 2025-05-23 00:01:15.151205 | orchestrator | 00:01:15.148 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-23 00:01:15.151212 | orchestrator | 00:01:15.148 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-23 00:01:15.151219 | orchestrator | 00:01:15.148 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.151226 | orchestrator | 00:01:15.148 STDOUT terraform:  + description = "node security group" 2025-05-23 00:01:15.151232 | orchestrator | 00:01:15.148 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151238 | orchestrator | 00:01:15.148 STDOUT terraform:  + name = "testbed-node" 2025-05-23 00:01:15.151245 | orchestrator | 00:01:15.148 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.151251 | orchestrator | 00:01:15.148 STDOUT terraform:  + stateful = (known after apply) 2025-05-23 00:01:15.151258 | orchestrator | 00:01:15.148 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.151272 | orchestrator | 00:01:15.148 STDOUT terraform:  } 2025-05-23 00:01:15.151279 | orchestrator | 00:01:15.148 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-23 00:01:15.151285 | orchestrator | 00:01:15.148 STDOUT terraform:  + resource 
"openstack_networking_subnet_v2" "subnet_management" { 2025-05-23 00:01:15.151291 | orchestrator | 00:01:15.148 STDOUT terraform:  + all_tags = (known after apply) 2025-05-23 00:01:15.151297 | orchestrator | 00:01:15.148 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-23 00:01:15.151303 | orchestrator | 00:01:15.148 STDOUT terraform:  + dns_nameservers = [ 2025-05-23 00:01:15.151316 | orchestrator | 00:01:15.148 STDOUT terraform:  + "8.8.8.8", 2025-05-23 00:01:15.151323 | orchestrator | 00:01:15.148 STDOUT terraform:  + "9.9.9.9", 2025-05-23 00:01:15.151329 | orchestrator | 00:01:15.148 STDOUT terraform:  ] 2025-05-23 00:01:15.151336 | orchestrator | 00:01:15.148 STDOUT terraform:  + enable_dhcp = true 2025-05-23 00:01:15.151343 | orchestrator | 00:01:15.148 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-23 00:01:15.151350 | orchestrator | 00:01:15.149 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151356 | orchestrator | 00:01:15.149 STDOUT terraform:  + ip_version = 4 2025-05-23 00:01:15.151362 | orchestrator | 00:01:15.149 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-23 00:01:15.151369 | orchestrator | 00:01:15.149 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-23 00:01:15.151375 | orchestrator | 00:01:15.149 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-23 00:01:15.151381 | orchestrator | 00:01:15.149 STDOUT terraform:  + network_id = (known after apply) 2025-05-23 00:01:15.151387 | orchestrator | 00:01:15.149 STDOUT terraform:  + no_gateway = false 2025-05-23 00:01:15.151393 | orchestrator | 00:01:15.149 STDOUT terraform:  + region = (known after apply) 2025-05-23 00:01:15.151399 | orchestrator | 00:01:15.149 STDOUT terraform:  + service_types = (known after apply) 2025-05-23 00:01:15.151405 | orchestrator | 00:01:15.149 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-23 00:01:15.151419 | orchestrator | 00:01:15.149 STDOUT terraform:  + allocation_pool { 2025-05-23 00:01:15.151427 | orchestrator | 00:01:15.149 STDOUT terraform:  + end = "192.168.31.250" 2025-05-23 00:01:15.151433 | orchestrator | 00:01:15.149 STDOUT terraform:  + start = "192.168.31.200" 2025-05-23 00:01:15.151439 | orchestrator | 00:01:15.149 STDOUT terraform:  } 2025-05-23 00:01:15.151449 | orchestrator | 00:01:15.149 STDOUT terraform:  } 2025-05-23 00:01:15.151455 | orchestrator | 00:01:15.149 STDOUT terraform:  # terraform_data.image will be created 2025-05-23 00:01:15.151461 | orchestrator | 00:01:15.149 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-23 00:01:15.151467 | orchestrator | 00:01:15.149 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151473 | orchestrator | 00:01:15.149 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-23 00:01:15.151479 | orchestrator | 00:01:15.149 STDOUT terraform:  + output = (known after apply) 2025-05-23 00:01:15.151485 | orchestrator | 00:01:15.149 STDOUT terraform:  } 2025-05-23 00:01:15.151491 | orchestrator | 00:01:15.149 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-23 00:01:15.151497 | orchestrator | 00:01:15.149 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-23 00:01:15.151504 | orchestrator | 00:01:15.150 STDOUT terraform:  + id = (known after apply) 2025-05-23 00:01:15.151511 | orchestrator | 00:01:15.150 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-23 00:01:15.151517 | orchestrator | 00:01:15.150 STDOUT terraform:  + output = (known after apply) 2025-05-23 
00:01:15.151523 | orchestrator | 00:01:15.150 STDOUT terraform:  } 2025-05-23 00:01:15.151529 | orchestrator | 00:01:15.150 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-23 00:01:15.151536 | orchestrator | 00:01:15.150 STDOUT terraform: Changes to Outputs: 2025-05-23 00:01:15.151542 | orchestrator | 00:01:15.150 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-23 00:01:15.151549 | orchestrator | 00:01:15.150 STDOUT terraform:  + private_key = (sensitive value) 2025-05-23 00:01:15.363883 | orchestrator | 00:01:15.363 STDOUT terraform: terraform_data.image: Creating... 2025-05-23 00:01:15.367578 | orchestrator | 00:01:15.367 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=663c63c6-2749-9872-5be6-f44f899493d8] 2025-05-23 00:01:15.372585 | orchestrator | 00:01:15.372 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-23 00:01:15.374103 | orchestrator | 00:01:15.373 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=259665b2-765d-3ad2-e85f-54c41b47d804] 2025-05-23 00:01:15.379759 | orchestrator | 00:01:15.379 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-23 00:01:15.380033 | orchestrator | 00:01:15.379 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-23 00:01:15.388715 | orchestrator | 00:01:15.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-23 00:01:15.389178 | orchestrator | 00:01:15.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-23 00:01:15.389681 | orchestrator | 00:01:15.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-23 00:01:15.389766 | orchestrator | 00:01:15.389 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-23 00:01:15.390994 | orchestrator | 00:01:15.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-23 00:01:15.391312 | orchestrator | 00:01:15.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-23 00:01:15.392007 | orchestrator | 00:01:15.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-23 00:01:15.393447 | orchestrator | 00:01:15.393 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-23 00:01:15.862080 | orchestrator | 00:01:15.861 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-23 00:01:15.862545 | orchestrator | 00:01:15.862 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-23 00:01:15.873536 | orchestrator | 00:01:15.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-23 00:01:15.873597 | orchestrator | 00:01:15.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-23 00:01:15.874711 | orchestrator | 00:01:15.874 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-23 00:01:15.885926 | orchestrator | 00:01:15.885 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
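The plan output above is the rendered form of the underlying Terraform configuration. As a minimal sketch of what that configuration looks like in HCL, the ssh rule and the management subnet would read roughly as follows; the literal values are taken from the plan, while the cross-resource references and file layout are assumptions, since the actual testbed .tf files are not part of this log:

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumption: the rule is attached to the management security group created later in the apply.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # reference assumed
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}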
2025-05-23 00:01:21.486836 | orchestrator | 00:01:21.486 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=95503252-f85c-4ee8-ac0d-41dd2c1c855b] 2025-05-23 00:01:21.497992 | orchestrator | 00:01:21.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-23 00:01:25.391055 | orchestrator | 00:01:25.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-23 00:01:25.392198 | orchestrator | 00:01:25.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-23 00:01:25.392317 | orchestrator | 00:01:25.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-23 00:01:25.392342 | orchestrator | 00:01:25.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-23 00:01:25.393232 | orchestrator | 00:01:25.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-23 00:01:25.393345 | orchestrator | 00:01:25.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-23 00:01:25.874688 | orchestrator | 00:01:25.874 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-23 00:01:25.874827 | orchestrator | 00:01:25.874 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-23 00:01:25.886863 | orchestrator | 00:01:25.886 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-23 00:01:26.045074 | orchestrator | 00:01:26.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=2ac02f21-3ef0-4f70-9ec3-b7448efc3652] 2025-05-23 00:01:26.053427 | orchestrator | 00:01:26.053 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-23 00:01:26.068378 | orchestrator | 00:01:26.068 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=2fc59eae-0e0c-4c3b-84f8-905b4655c6b7] 2025-05-23 00:01:26.076204 | orchestrator | 00:01:26.075 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-23 00:01:26.186403 | orchestrator | 00:01:26.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=18473d69-2fd0-4937-9240-f5fad34c2ed7] 2025-05-23 00:01:26.193495 | orchestrator | 00:01:26.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-23 00:01:26.220268 | orchestrator | 00:01:26.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=eb878625-a80c-49f3-a757-e0a303c4dd75] 2025-05-23 00:01:26.228608 | orchestrator | 00:01:26.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-23 00:01:26.281585 | orchestrator | 00:01:26.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=329d29a6-e648-44c1-9803-5cc5abc56db6] 2025-05-23 00:01:26.296780 | orchestrator | 00:01:26.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
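The indexed resource names (node_volume[0] through node_volume[8], node_base_volume[0] through node_base_volume[5]) indicate count-based resources, which is why each volume is created and reported individually. A minimal sketch of such a block, with the count of 9 inferred from the indices above; the name pattern and size are illustrative assumptions only:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                # inferred from indices [0]..[8] in the log
  name  = "testbed-volume-${count.index}"  # assumption: real naming is not visible here
  size  = 20                               # assumption: real size is not visible here
}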
2025-05-23 00:01:26.310582 | orchestrator | 00:01:26.310 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=29f848a2-d495-4783-815a-7e69d4da9d2d] 2025-05-23 00:01:26.321133 | orchestrator | 00:01:26.320 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-23 00:01:26.327029 | orchestrator | 00:01:26.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=5f24398e-55ab-4e45-a360-e924ed2b4127] 2025-05-23 00:01:26.343871 | orchestrator | 00:01:26.343 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-23 00:01:26.348546 | orchestrator | 00:01:26.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=3c0d7b27-8ebd-4816-b389-8c3a005395e5] 2025-05-23 00:01:26.348762 | orchestrator | 00:01:26.348 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=cc921a232be4991bfe41877c8968e6286293b72e] 2025-05-23 00:01:26.359373 | orchestrator | 00:01:26.359 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-23 00:01:26.361242 | orchestrator | 00:01:26.361 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-23 00:01:26.364015 | orchestrator | 00:01:26.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=252b3cc1-c875-426d-9475-c1c0edf2ac3c] 2025-05-23 00:01:26.366555 | orchestrator | 00:01:26.366 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=d406e2bfe82341f89f34dbe19b6bda4e70406fc8] 2025-05-23 00:01:31.500607 | orchestrator | 00:01:31.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-23 00:01:31.869002 | orchestrator | 00:01:31.868 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=e91133d1-5a4c-4c6b-aae9-a3102c4d2118] 2025-05-23 00:01:32.270902 | orchestrator | 00:01:32.270 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=a7fa34cd-874e-4481-b5bb-f742a367479b] 2025-05-23 00:01:32.277857 | orchestrator | 00:01:32.277 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-23 00:01:36.054806 | orchestrator | 00:01:36.054 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-23 00:01:36.078242 | orchestrator | 00:01:36.077 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-23 00:01:36.194701 | orchestrator | 00:01:36.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-23 00:01:36.230103 | orchestrator | 00:01:36.229 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-23 00:01:36.292457 | orchestrator | 00:01:36.292 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-23 00:01:36.322834 | orchestrator | 00:01:36.322 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-23 00:01:36.425685 | orchestrator | 00:01:36.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=80dad4c8-3190-408a-8751-9f09dded29fb] 2025-05-23 00:01:36.604753 | orchestrator | 00:01:36.604 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6] 2025-05-23 00:01:36.643715 | orchestrator | 00:01:36.643 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=eb22843a-e40d-4bb3-bbec-4f656ad57efb] 2025-05-23 00:01:36.718472 | orchestrator | 00:01:36.718 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=9f50aec4-5d55-4859-ae08-1d6db96ed4b7] 2025-05-23 00:01:36.781367 | orchestrator | 00:01:36.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=ab21c0a7-19ba-47fa-9bfa-a97fbae45af4] 2025-05-23 00:01:37.171314 | orchestrator | 00:01:37.170 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=c8efe3c1-6307-4e01-8bfc-afd4fa6a2572] 2025-05-23 00:01:40.340706 | orchestrator | 00:01:40.340 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=6fac29d5-2ac0-45d2-b4b5-0b8ef5b358fb] 2025-05-23 00:01:40.348748 | orchestrator | 00:01:40.348 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-23 00:01:40.351149 | orchestrator | 00:01:40.350 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-23 00:01:40.353153 | orchestrator | 00:01:40.352 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-23 00:01:40.535636 | orchestrator | 00:01:40.535 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=791b42ee-e674-44ba-ae19-558da75fc49d] 2025-05-23 00:01:40.548616 | orchestrator | 00:01:40.548 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-23 00:01:40.550848 | orchestrator | 00:01:40.550 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-23 00:01:40.552467 | orchestrator | 00:01:40.552 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-23 00:01:40.553323 | orchestrator | 00:01:40.553 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-23 00:01:40.553451 | orchestrator | 00:01:40.553 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-23 00:01:40.559932 | orchestrator | 00:01:40.559 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-23 00:01:40.596843 | orchestrator | 00:01:40.596 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=5845c1d0-f103-47fe-9180-497b7e753d81] 2025-05-23 00:01:40.606969 | orchestrator | 00:01:40.605 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-23 00:01:40.607043 | orchestrator | 00:01:40.605 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
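The router was planned with the external network e6be7364-bfd8-4de7-8120-8f41c69a139a and the availability zone hint "nova", and the router interface created next attaches it to the management subnet. A hedged sketch of the two resources, with values taken from the plan and apply output and the cross-references assumed:

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}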
2025-05-23 00:01:40.607294 | orchestrator | 00:01:40.607 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-23 00:01:40.721793 | orchestrator | 00:01:40.721 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=42fd074d-4dd7-4bc3-aafb-fb6a800ed7c1] 2025-05-23 00:01:40.741393 | orchestrator | 00:01:40.741 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-23 00:01:40.742649 | orchestrator | 00:01:40.742 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=deb35ea4-7191-4c49-9227-8d9ed049fffd] 2025-05-23 00:01:40.747674 | orchestrator | 00:01:40.747 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-23 00:01:40.894614 | orchestrator | 00:01:40.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b9baf993-fb56-4364-9217-52fa70b57fd7] 2025-05-23 00:01:40.910159 | orchestrator | 00:01:40.909 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-23 00:01:40.931767 | orchestrator | 00:01:40.931 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=79b174db-4b5d-409c-874c-ce4a935655d6] 2025-05-23 00:01:40.946306 | orchestrator | 00:01:40.946 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-23 00:01:41.063975 | orchestrator | 00:01:41.055 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=8ad74aa9-043c-45a1-bb0f-f821c57af753] 2025-05-23 00:01:41.075387 | orchestrator | 00:01:41.075 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-23 00:01:41.130901 | orchestrator | 00:01:41.130 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1d216d29-78e3-4c50-a619-08f79d233d63] 2025-05-23 00:01:41.148217 | orchestrator | 00:01:41.147 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-23 00:01:41.248020 | orchestrator | 00:01:41.247 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=139bddc5-b3fb-4b53-a803-f949cc5a4ca2] 2025-05-23 00:01:41.263608 | orchestrator | 00:01:41.263 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
2025-05-23 00:01:41.278817 | orchestrator | 00:01:41.278 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=fb318576-827d-449d-9b69-6edaebfc5e84] 2025-05-23 00:01:41.491578 | orchestrator | 00:01:41.491 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=8edb1208-cc59-4ace-b568-135af4a202fc] 2025-05-23 00:01:46.171692 | orchestrator | 00:01:46.171 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=45d15a3c-d08c-4636-ac45-aade6cbb71e5] 2025-05-23 00:01:46.459635 | orchestrator | 00:01:46.459 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=46e7d142-2f74-4b01-9518-ed38e7a4d20e] 2025-05-23 00:01:46.623213 | orchestrator | 00:01:46.622 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=f381ecb5-9dea-4d17-84e1-c416e48b00d8] 2025-05-23 00:01:46.638008 | orchestrator | 00:01:46.637 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=e2673372-05be-4189-8cdf-04a4bf42df29] 2025-05-23 00:01:46.682448 | orchestrator | 00:01:46.682 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=69414a6e-6211-40b6-8e89-516eef41eab8] 2025-05-23 00:01:47.061792 | orchestrator | 00:01:47.061 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=3d18babb-1686-4a85-b6ba-623579a67cc3] 2025-05-23 00:01:47.146858 | orchestrator | 00:01:47.146 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=2854d693-b0b6-4446-9bfa-c4975a5f9a81] 2025-05-23 00:01:47.346116 | orchestrator | 00:01:47.345 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=fc164e24-c9ed-4343-8849-5f2067c2bf2d] 2025-05-23 00:01:47.368012 | orchestrator | 00:01:47.367 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-23 00:01:47.385023 | orchestrator | 00:01:47.384 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-23 00:01:47.386284 | orchestrator | 00:01:47.386 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-23 00:01:47.386591 | orchestrator | 00:01:47.386 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-23 00:01:47.394329 | orchestrator | 00:01:47.394 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-23 00:01:47.399918 | orchestrator | 00:01:47.399 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-23 00:01:47.401373 | orchestrator | 00:01:47.401 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-23 00:01:53.759547 | orchestrator | 00:01:53.759 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=66d56369-a66c-4b89-ace8-c66824bba9bd] 2025-05-23 00:01:53.770146 | orchestrator | 00:01:53.769 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-23 00:01:53.775327 | orchestrator | 00:01:53.775 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-23 00:01:53.776263 | orchestrator | 00:01:53.776 STDOUT terraform: local_file.inventory: Creating... 
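The manager is reached through a floating IP: the address is allocated, associated with the manager's management port, and also written to local files for later use. A minimal sketch under those assumptions (the pool name and the file path are not visible in this log):

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"  # assumption: the actual pool name is not shown in the log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
  filename = ".MANAGER_ADDRESS"  # assumption: the actual path is not shown in the log
}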
2025-05-23 00:01:53.783142 | orchestrator | 00:01:53.782 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=caa19804e123e0f94000a9a9f11348b8a0a16806] 2025-05-23 00:01:53.783750 | orchestrator | 00:01:53.783 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a76a9a821ce41b2bf8b2cf5f960d9c44c69370fa] 2025-05-23 00:01:54.603264 | orchestrator | 00:01:54.602 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=66d56369-a66c-4b89-ace8-c66824bba9bd] 2025-05-23 00:01:57.385573 | orchestrator | 00:01:57.385 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-23 00:01:57.386544 | orchestrator | 00:01:57.386 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-23 00:01:57.387815 | orchestrator | 00:01:57.387 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-23 00:01:57.399007 | orchestrator | 00:01:57.398 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-23 00:01:57.401379 | orchestrator | 00:01:57.401 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-23 00:01:57.402513 | orchestrator | 00:01:57.402 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-23 00:02:07.386648 | orchestrator | 00:02:07.386 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-23 00:02:07.387564 | orchestrator | 00:02:07.387 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-23 00:02:07.388584 | orchestrator | 00:02:07.388 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-23 00:02:07.400211 | orchestrator | 00:02:07.399 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-23 00:02:07.402281 | orchestrator | 00:02:07.402 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-23 00:02:07.403362 | orchestrator | 00:02:07.403 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-23 00:02:07.816600 | orchestrator | 00:02:07.816 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=2a8bc5b1-6081-4ca5-a707-8ab215ce0548] 2025-05-23 00:02:07.880229 | orchestrator | 00:02:07.879 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=86d44525-ee04-4494-8ff2-321fdd81e4a7] 2025-05-23 00:02:07.944674 | orchestrator | 00:02:07.944 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=6ccc313d-a769-429f-a949-283bec3a9b5a] 2025-05-23 00:02:07.946726 | orchestrator | 00:02:07.946 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=437d8370-4f43-464b-8fb1-f456bf4f52ae] 2025-05-23 00:02:08.036590 | orchestrator | 00:02:08.036 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=1260ce40-29fa-4feb-9c59-3e722e07744b] 2025-05-23 00:02:17.403777 | orchestrator | 00:02:17.403 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-05-23 00:02:18.860593 | orchestrator | 00:02:18.860 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=59dee1b1-2178-4cab-87ac-e8ac69fe067c] 2025-05-23 00:02:18.890291 | orchestrator | 00:02:18.890 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-23 00:02:18.893236 | orchestrator | 00:02:18.893 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-23 00:02:18.896952 | orchestrator | 00:02:18.896 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-23 00:02:18.897798 | orchestrator | 00:02:18.897 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-23 00:02:18.898079 | orchestrator | 00:02:18.897 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-23 00:02:18.907888 | orchestrator | 00:02:18.907 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-23 00:02:18.910674 | orchestrator | 00:02:18.910 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-23 00:02:18.917908 | orchestrator | 00:02:18.917 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4058665089854998880] 2025-05-23 00:02:18.919322 | orchestrator | 00:02:18.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-23 00:02:18.919351 | orchestrator | 00:02:18.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-23 00:02:18.919357 | orchestrator | 00:02:18.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-23 00:02:18.943152 | orchestrator | 00:02:18.942 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
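Each openstack_compute_volume_attach_v2 resource pairs exactly one server with one volume, which is why the ids reported when the attachments complete below have the form <server id>/<volume id>; attachment[0], for example, combines node_server[3] (2a8bc5b1-...) with node_volume[0] (252b3cc1-...). A minimal sketch of a single attachment; the real configuration iterates with count over a node-to-volume mapping that is not visible in this log:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  instance_id = openstack_compute_instance_v2.node_server[3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[0].id
}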
2025-05-23 00:02:24.203774 | orchestrator | 00:02:24.203 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=437d8370-4f43-464b-8fb1-f456bf4f52ae/329d29a6-e648-44c1-9803-5cc5abc56db6] 2025-05-23 00:02:24.220391 | orchestrator | 00:02:24.220 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=2a8bc5b1-6081-4ca5-a707-8ab215ce0548/252b3cc1-c875-426d-9475-c1c0edf2ac3c] 2025-05-23 00:02:24.241011 | orchestrator | 00:02:24.240 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=1260ce40-29fa-4feb-9c59-3e722e07744b/29f848a2-d495-4783-815a-7e69d4da9d2d] 2025-05-23 00:02:24.254138 | orchestrator | 00:02:24.253 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=437d8370-4f43-464b-8fb1-f456bf4f52ae/5f24398e-55ab-4e45-a360-e924ed2b4127] 2025-05-23 00:02:24.262169 | orchestrator | 00:02:24.261 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=2a8bc5b1-6081-4ca5-a707-8ab215ce0548/eb878625-a80c-49f3-a757-e0a303c4dd75] 2025-05-23 00:02:24.271385 | orchestrator | 00:02:24.271 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=1260ce40-29fa-4feb-9c59-3e722e07744b/2ac02f21-3ef0-4f70-9ec3-b7448efc3652] 2025-05-23 00:02:24.289670 | orchestrator | 00:02:24.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=437d8370-4f43-464b-8fb1-f456bf4f52ae/18473d69-2fd0-4937-9240-f5fad34c2ed7] 2025-05-23 00:02:24.295588 | orchestrator | 00:02:24.295 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=2a8bc5b1-6081-4ca5-a707-8ab215ce0548/3c0d7b27-8ebd-4816-b389-8c3a005395e5] 2025-05-23 00:02:24.316230 | orchestrator | 00:02:24.315 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=1260ce40-29fa-4feb-9c59-3e722e07744b/2fc59eae-0e0c-4c3b-84f8-905b4655c6b7] 2025-05-23 00:02:28.944575 | orchestrator | 00:02:28.944 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-23 00:02:38.945122 | orchestrator | 00:02:38.944 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-23 00:02:39.814252 | orchestrator | 00:02:39.813 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=5dfde4a9-f500-4c88-9593-ca73a3cef24a] 2025-05-23 00:02:39.835025 | orchestrator | 00:02:39.834 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
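In the Outputs section that follows, manager_address and private_key print as blank values because both outputs are declared sensitive (the plan already listed them as "(sensitive value)"). A sketch of what such declarations typically look like; only the names and the sensitive marking follow from the log, the value expressions are assumptions:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumption
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key  # assumption: key source not shown in the log
  sensitive = true
}

Sensitive outputs can still be read explicitly after the apply, for example with terraform output -raw manager_address, which is presumably how the subsequent "Fetch manager address" task obtains the value.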
2025-05-23 00:02:39.835134 | orchestrator | 00:02:39.834 STDOUT terraform: Outputs: 2025-05-23 00:02:39.835154 | orchestrator | 00:02:39.834 STDOUT terraform: manager_address = 2025-05-23 00:02:39.835166 | orchestrator | 00:02:39.835 STDOUT terraform: private_key = 2025-05-23 00:02:39.936891 | orchestrator | ok: Runtime: 0:01:35.492092 2025-05-23 00:02:39.972136 | 2025-05-23 00:02:39.972288 | TASK [Fetch manager address] 2025-05-23 00:02:40.441344 | orchestrator | ok 2025-05-23 00:02:40.455184 | 2025-05-23 00:02:40.455385 | TASK [Set manager_host address] 2025-05-23 00:02:40.554093 | orchestrator | ok 2025-05-23 00:02:40.564604 | 2025-05-23 00:02:40.564759 | LOOP [Update ansible collections] 2025-05-23 00:02:41.440643 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-23 00:02:41.440973 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-23 00:02:41.441021 | orchestrator | Starting galaxy collection install process 2025-05-23 00:02:41.441047 | orchestrator | Process install dependency map 2025-05-23 00:02:41.441074 | orchestrator | Starting collection install process 2025-05-23 00:02:41.441101 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-23 00:02:41.441133 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-23 00:02:41.441170 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-23 00:02:41.441250 | orchestrator | ok: Item: commons Runtime: 0:00:00.544292 2025-05-23 00:02:42.282097 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-23 00:02:42.282302 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-23 00:02:42.282465 | orchestrator | Starting galaxy collection install process 2025-05-23 00:02:42.282509 | orchestrator | Process install dependency map 2025-05-23 00:02:42.282538 | orchestrator | Starting collection install process 2025-05-23 00:02:42.282564 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-23 00:02:42.282591 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-23 00:02:42.282617 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-23 00:02:42.282670 | orchestrator | ok: Item: services Runtime: 0:00:00.567687 2025-05-23 00:02:42.301592 | 2025-05-23 00:02:42.301810 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-23 00:02:52.910693 | orchestrator | ok 2025-05-23 00:02:52.921919 | 2025-05-23 00:02:52.922071 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-23 00:03:52.966163 | orchestrator | ok 2025-05-23 00:03:52.981539 | 2025-05-23 00:03:52.981732 | TASK [Fetch manager ssh hostkey] 2025-05-23 00:03:54.589538 | orchestrator | Output suppressed because no_log was given 2025-05-23 00:03:54.606728 | 2025-05-23 00:03:54.606976 | TASK [Get ssh keypair from terraform environment] 2025-05-23 00:03:55.150229 | orchestrator | ok: Runtime: 0:00:00.010750 2025-05-23 00:03:55.167559 | 2025-05-23 00:03:55.167749 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-23 00:03:55.217967 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-23 00:03:55.234882 | 2025-05-23 00:03:55.235091 | TASK [Run manager part 0] 2025-05-23 00:03:56.455406 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-23 00:03:56.512162 | orchestrator | 2025-05-23 00:03:56.512222 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-23 00:03:56.512233 | orchestrator | 2025-05-23 00:03:56.512251 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-23 00:03:58.672835 | orchestrator | ok: [testbed-manager] 2025-05-23 00:03:58.672915 | orchestrator | 2025-05-23 00:03:58.672951 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-23 00:03:58.672970 | orchestrator | 2025-05-23 00:03:58.672987 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:04:00.600263 | orchestrator | ok: [testbed-manager] 2025-05-23 00:04:00.600327 | orchestrator | 2025-05-23 00:04:00.600336 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-23 00:04:01.287265 | orchestrator | ok: [testbed-manager] 2025-05-23 00:04:01.287322 | orchestrator | 2025-05-23 00:04:01.287329 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-23 00:04:01.345436 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.345491 | orchestrator | 2025-05-23 00:04:01.345503 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-23 00:04:01.379844 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.379894 | orchestrator | 2025-05-23 00:04:01.379903 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-23 00:04:01.409531 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.409577 | orchestrator | 2025-05-23 00:04:01.409584 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-23 00:04:01.433426 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.433498 | orchestrator | 2025-05-23 00:04:01.433512 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-23 00:04:01.463722 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.463744 | orchestrator | 2025-05-23 00:04:01.463751 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-23 00:04:01.494210 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.494240 | orchestrator | 2025-05-23 00:04:01.494248 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-23 00:04:01.530309 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:04:01.530358 | orchestrator | 2025-05-23 00:04:01.530367 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-23 00:04:02.317829 | orchestrator | changed: [testbed-manager] 2025-05-23 00:04:02.317898 | orchestrator | 2025-05-23 00:04:02.317908 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-23 00:06:59.929148 | orchestrator | changed: [testbed-manager] 2025-05-23 00:06:59.931621 | orchestrator | 2025-05-23 00:06:59.931647 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-23 00:08:35.829475 | orchestrator | changed: [testbed-manager] 2025-05-23 00:08:35.829525 | orchestrator | 2025-05-23 00:08:35.829534 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-23 00:08:56.047420 | orchestrator | changed: [testbed-manager] 2025-05-23 00:08:56.047468 | orchestrator | 2025-05-23 00:08:56.047478 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-23 00:09:04.430277 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:04.430324 | orchestrator | 2025-05-23 00:09:04.430384 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-23 00:09:04.478514 | orchestrator | ok: [testbed-manager] 2025-05-23 00:09:04.478619 | orchestrator | 2025-05-23 00:09:04.478628 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-23 00:09:05.261465 | orchestrator | ok: [testbed-manager] 2025-05-23 00:09:05.261550 | orchestrator | 2025-05-23 00:09:05.261568 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-23 00:09:05.962672 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:05.962752 | orchestrator | 2025-05-23 00:09:05.962767 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-23 00:09:12.321625 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:12.321716 | orchestrator | 2025-05-23 00:09:12.321754 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-23 00:09:18.056001 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:18.056093 | orchestrator | 2025-05-23 00:09:18.056114 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-23 00:09:20.670155 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:20.670247 | orchestrator | 2025-05-23 00:09:20.670265 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-23 00:09:22.393464 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:22.393530 | orchestrator | 2025-05-23 00:09:22.393540 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-23 00:09:23.492537 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-23 00:09:23.492633 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-23 00:09:23.492649 | orchestrator | 2025-05-23 00:09:23.492663 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-23 00:09:23.534574 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-23 00:09:23.534643 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-23 00:09:23.534657 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-23 00:09:23.534669 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-23 00:09:27.258847 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-23 00:09:27.258887 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-23 00:09:27.258894 | orchestrator | 2025-05-23 00:09:27.258901 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-23 00:09:27.832514 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:27.832604 | orchestrator | 2025-05-23 00:09:27.832620 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-23 00:09:48.296762 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-23 00:09:48.297386 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-23 00:09:48.297414 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-23 00:09:48.297427 | orchestrator | 2025-05-23 00:09:48.297440 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-23 00:09:50.558603 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-23 00:09:50.558688 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-23 00:09:50.558703 | orchestrator | 2025-05-23 00:09:50.558716 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-23 00:09:50.558729 | orchestrator | 2025-05-23 00:09:50.558740 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:09:51.943176 | orchestrator | ok: [testbed-manager] 2025-05-23 00:09:51.943237 | orchestrator | 2025-05-23 00:09:51.943251 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-23 00:09:51.990623 | orchestrator | ok: [testbed-manager] 2025-05-23 00:09:51.990683 | orchestrator | 2025-05-23 00:09:51.990692 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-23 00:09:52.056932 | orchestrator | ok: [testbed-manager] 2025-05-23 00:09:52.056984 | orchestrator | 2025-05-23 00:09:52.056991 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-23 00:09:52.793426 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:52.793500 | orchestrator | 2025-05-23 00:09:52.793513 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-23 00:09:53.517595 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:53.518144 | orchestrator | 2025-05-23 00:09:53.518176 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-23 00:09:54.877039 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-23 00:09:54.877115 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-23 00:09:54.877128 | orchestrator | 2025-05-23 00:09:54.877154 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-23 00:09:56.222478 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:56.222598 | orchestrator | 2025-05-23 00:09:56.222622 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-23 00:09:57.945253 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 
00:09:57.945375 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-23 00:09:57.945390 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:09:57.945403 | orchestrator | 2025-05-23 00:09:57.945415 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-23 00:09:58.519111 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:58.519198 | orchestrator | 2025-05-23 00:09:58.519216 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-23 00:09:58.579907 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:09:58.579977 | orchestrator | 2025-05-23 00:09:58.579992 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-23 00:09:59.434249 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:09:59.434345 | orchestrator | changed: [testbed-manager] 2025-05-23 00:09:59.434361 | orchestrator | 2025-05-23 00:09:59.434373 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-23 00:09:59.474313 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:09:59.474360 | orchestrator | 2025-05-23 00:09:59.474368 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-23 00:09:59.507914 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:09:59.508008 | orchestrator | 2025-05-23 00:09:59.508026 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-23 00:09:59.544984 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:09:59.545067 | orchestrator | 2025-05-23 00:09:59.545083 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-23 00:09:59.596066 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:09:59.596156 | orchestrator | 2025-05-23 00:09:59.596172 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-23 00:10:00.309741 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:00.309779 | orchestrator | 2025-05-23 00:10:00.309785 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-23 00:10:00.309790 | orchestrator | 2025-05-23 00:10:00.309795 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:10:01.716365 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:01.716456 | orchestrator | 2025-05-23 00:10:01.716474 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-23 00:10:02.658319 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:02.658353 | orchestrator | 2025-05-23 00:10:02.658359 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:10:02.658366 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-23 00:10:02.658370 | orchestrator | 2025-05-23 00:10:03.047381 | orchestrator | ok: Runtime: 0:06:07.219105 2025-05-23 00:10:03.071719 | 2025-05-23 00:10:03.071924 | TASK [Point out that the log in on the manager is now possible] 2025-05-23 00:10:03.119764 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
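The osism.commons.operator tasks above (group, user, extra groups, sudoers file, locale exports, .ssh directory, locked password) can be approximated by hand as in the sketch below. The user name dragon is inferred from the /home/dragon paths that appear later in this log, and the sudoers line is an assumed placeholder, not the role's actual template:

    groupadd dragon                                             # "Create operator group"
    useradd --gid dragon --groups adm,sudo \
            --create-home --shell /bin/bash dragon              # "Create user" + "Add user to additional groups"
    echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon   # assumed content of the user sudoers file
    printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc
    install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh    # "Create .ssh directory"
    passwd --lock dragon                                        # "Unset & lock password"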
2025-05-23 00:10:03.130560 | 2025-05-23 00:10:03.130715 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-23 00:10:03.181287 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-23 00:10:03.192857 | 2025-05-23 00:10:03.193029 | TASK [Run manager part 1 + 2] 2025-05-23 00:10:04.007055 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-23 00:10:04.065232 | orchestrator | 2025-05-23 00:10:04.065308 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-23 00:10:04.065320 | orchestrator | 2025-05-23 00:10:04.065338 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:10:06.572827 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:06.572922 | orchestrator | 2025-05-23 00:10:06.572977 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-23 00:10:06.605657 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:10:06.605705 | orchestrator | 2025-05-23 00:10:06.605714 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-23 00:10:06.645708 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:06.645760 | orchestrator | 2025-05-23 00:10:06.645767 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-23 00:10:06.675550 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:06.675642 | orchestrator | 2025-05-23 00:10:06.675670 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-23 00:10:06.748109 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:06.748194 | orchestrator | 2025-05-23 00:10:06.748211 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-23 00:10:06.818119 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:06.818167 | orchestrator | 2025-05-23 00:10:06.818174 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-23 00:10:06.863094 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-23 00:10:06.863177 | orchestrator | 2025-05-23 00:10:06.863194 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-23 00:10:07.588996 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:07.589116 | orchestrator | 2025-05-23 00:10:07.589127 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-23 00:10:07.635769 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:10:07.635818 | orchestrator | 2025-05-23 00:10:07.635827 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-23 00:10:09.023592 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:09.023652 | orchestrator | 2025-05-23 00:10:09.023662 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-23 00:10:09.595777 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:09.595833 | orchestrator | 2025-05-23 00:10:09.595843 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-23 00:10:10.764605 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:10.764658 | orchestrator | 2025-05-23 00:10:10.764670 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-23 00:10:23.758895 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:23.758992 | orchestrator | 2025-05-23 00:10:23.759008 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-23 00:10:24.457417 | orchestrator | ok: [testbed-manager] 2025-05-23 00:10:24.457505 | orchestrator | 2025-05-23 00:10:24.457524 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-23 00:10:24.519314 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:10:24.519389 | orchestrator | 2025-05-23 00:10:24.519402 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-23 00:10:25.522566 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:25.522654 | orchestrator | 2025-05-23 00:10:25.522670 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-23 00:10:26.506419 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:26.506535 | orchestrator | 2025-05-23 00:10:26.506552 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-23 00:10:27.115634 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:27.115733 | orchestrator | 2025-05-23 00:10:27.115749 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-23 00:10:27.163699 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-23 00:10:27.163796 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-23 00:10:27.163810 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-23 00:10:27.163823 | orchestrator | deprecation_warnings=False in ansible.cfg. 
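On Ubuntu 24.04 the repository role logged above drops the classic sources.list in favour of a deb822-style ubuntu.sources file and then refreshes the package cache. A minimal manual sketch of that sequence follows; the mirror URI, suites and keyring path are typical Ubuntu defaults used as placeholders here, not values read from this job's configuration:

    rm -f /etc/apt/sources.list                          # "Remove sources.list file"
    cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'  # placeholder deb822 content
    Types: deb
    URIs: http://archive.ubuntu.com/ubuntu
    Suites: noble noble-updates noble-backports
    Components: main restricted universe multiverse
    Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    EOF
    apt-get update                                       # "Update package cache"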
2025-05-23 00:10:29.757607 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:29.757684 | orchestrator | 2025-05-23 00:10:29.757696 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-23 00:10:38.499752 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-23 00:10:38.499839 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-23 00:10:38.499854 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-23 00:10:38.499864 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-23 00:10:38.499880 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-23 00:10:38.499889 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-23 00:10:38.499899 | orchestrator | 2025-05-23 00:10:38.499909 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-23 00:10:39.593508 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:39.643090 | orchestrator | 2025-05-23 00:10:39.643153 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-23 00:10:39.643382 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:10:39.643437 | orchestrator | 2025-05-23 00:10:39.643448 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-23 00:10:42.871011 | orchestrator | changed: [testbed-manager] 2025-05-23 00:10:42.871057 | orchestrator | 2025-05-23 00:10:42.871067 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-23 00:10:42.912513 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:10:42.912554 | orchestrator | 2025-05-23 00:10:42.912562 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-23 00:12:21.508970 | orchestrator | changed: [testbed-manager] 2025-05-23 00:12:21.509171 | orchestrator | 2025-05-23 00:12:21.509196 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-23 00:12:22.608917 | orchestrator | ok: [testbed-manager] 2025-05-23 00:12:22.608973 | orchestrator | 2025-05-23 00:12:22.608986 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:12:22.608998 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-23 00:12:22.609009 | orchestrator | 2025-05-23 00:12:22.819819 | orchestrator | ok: Runtime: 0:02:19.228832 2025-05-23 00:12:22.836052 | 2025-05-23 00:12:22.836263 | TASK [Reboot manager] 2025-05-23 00:12:24.374441 | orchestrator | ok: Runtime: 0:00:00.941018 2025-05-23 00:12:24.390004 | 2025-05-23 00:12:24.390152 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-23 00:12:40.795829 | orchestrator | ok 2025-05-23 00:12:40.804641 | 2025-05-23 00:12:40.804777 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-23 00:13:40.867184 | orchestrator | ok 2025-05-23 00:13:40.876162 | 2025-05-23 00:13:40.876289 | TASK [Deploy manager + bootstrap nodes] 2025-05-23 00:13:43.447598 | orchestrator | 2025-05-23 00:13:43.447794 | orchestrator | # DEPLOY MANAGER 2025-05-23 00:13:43.447817 | orchestrator | 2025-05-23 00:13:43.447831 | orchestrator | + set -e 2025-05-23 00:13:43.447844 | orchestrator | + echo 2025-05-23 00:13:43.447858 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-23 00:13:43.447876 | orchestrator | + echo 2025-05-23 00:13:43.447926 | orchestrator | + cat /opt/manager-vars.sh 2025-05-23 00:13:43.451135 | orchestrator | export NUMBER_OF_NODES=6 2025-05-23 00:13:43.451163 | orchestrator | 2025-05-23 00:13:43.451177 | orchestrator | export CEPH_VERSION=reef 2025-05-23 00:13:43.451190 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-23 00:13:43.451202 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-23 00:13:43.451225 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-23 00:13:43.451236 | orchestrator | 2025-05-23 00:13:43.451254 | orchestrator | export ARA=false 2025-05-23 00:13:43.451265 | orchestrator | export TEMPEST=false 2025-05-23 00:13:43.451283 | orchestrator | export IS_ZUUL=true 2025-05-23 00:13:43.451294 | orchestrator | 2025-05-23 00:13:43.451312 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:13:43.451324 | orchestrator | export EXTERNAL_API=false 2025-05-23 00:13:43.451335 | orchestrator | 2025-05-23 00:13:43.451356 | orchestrator | export IMAGE_USER=ubuntu 2025-05-23 00:13:43.451367 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-23 00:13:43.451378 | orchestrator | 2025-05-23 00:13:43.451391 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-23 00:13:43.451408 | orchestrator | 2025-05-23 00:13:43.451419 | orchestrator | + echo 2025-05-23 00:13:43.451430 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-23 00:13:43.452487 | orchestrator | ++ export INTERACTIVE=false 2025-05-23 00:13:43.452505 | orchestrator | ++ INTERACTIVE=false 2025-05-23 00:13:43.452522 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-23 00:13:43.452534 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-23 00:13:43.452729 | orchestrator | + source /opt/manager-vars.sh 2025-05-23 00:13:43.452746 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-23 00:13:43.452757 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-23 00:13:43.452864 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-23 00:13:43.452880 | orchestrator | ++ CEPH_VERSION=reef 2025-05-23 00:13:43.452891 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-23 00:13:43.452903 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-23 00:13:43.452914 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-23 00:13:43.452925 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-23 00:13:43.452936 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-23 00:13:43.452947 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-23 00:13:43.452961 | orchestrator | ++ export ARA=false 2025-05-23 00:13:43.452973 | orchestrator | ++ ARA=false 2025-05-23 00:13:43.452992 | orchestrator | ++ export TEMPEST=false 2025-05-23 00:13:43.453003 | orchestrator | ++ TEMPEST=false 2025-05-23 00:13:43.453017 | orchestrator | ++ export IS_ZUUL=true 2025-05-23 00:13:43.453029 | orchestrator | ++ IS_ZUUL=true 2025-05-23 00:13:43.453040 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:13:43.453051 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:13:43.453063 | orchestrator | ++ export EXTERNAL_API=false 2025-05-23 00:13:43.453073 | orchestrator | ++ EXTERNAL_API=false 2025-05-23 00:13:43.453103 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-23 00:13:43.453114 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-23 00:13:43.453125 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-23 00:13:43.453136 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-23 
00:13:43.453152 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-23 00:13:43.453163 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-23 00:13:43.453174 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-23 00:13:43.513489 | orchestrator | + docker version 2025-05-23 00:13:43.775978 | orchestrator | Client: Docker Engine - Community 2025-05-23 00:13:43.776109 | orchestrator | Version: 26.1.4 2025-05-23 00:13:43.776140 | orchestrator | API version: 1.45 2025-05-23 00:13:43.776159 | orchestrator | Go version: go1.21.11 2025-05-23 00:13:43.776175 | orchestrator | Git commit: 5650f9b 2025-05-23 00:13:43.776186 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-23 00:13:43.776198 | orchestrator | OS/Arch: linux/amd64 2025-05-23 00:13:43.776210 | orchestrator | Context: default 2025-05-23 00:13:43.776221 | orchestrator | 2025-05-23 00:13:43.776232 | orchestrator | Server: Docker Engine - Community 2025-05-23 00:13:43.776243 | orchestrator | Engine: 2025-05-23 00:13:43.776254 | orchestrator | Version: 26.1.4 2025-05-23 00:13:43.776265 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-23 00:13:43.776276 | orchestrator | Go version: go1.21.11 2025-05-23 00:13:43.776287 | orchestrator | Git commit: de5c9cf 2025-05-23 00:13:43.776328 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-23 00:13:43.776340 | orchestrator | OS/Arch: linux/amd64 2025-05-23 00:13:43.776351 | orchestrator | Experimental: false 2025-05-23 00:13:43.776362 | orchestrator | containerd: 2025-05-23 00:13:43.776372 | orchestrator | Version: 1.7.27 2025-05-23 00:13:43.776383 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-23 00:13:43.776394 | orchestrator | runc: 2025-05-23 00:13:43.776405 | orchestrator | Version: 1.2.5 2025-05-23 00:13:43.776416 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-23 00:13:43.776427 | orchestrator | docker-init: 2025-05-23 00:13:43.776438 | orchestrator | Version: 0.19.0 2025-05-23 00:13:43.776449 | orchestrator | GitCommit: de40ad0 2025-05-23 00:13:43.778711 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-23 00:13:43.785869 | orchestrator | + set -e 2025-05-23 00:13:43.785923 | orchestrator | + source /opt/manager-vars.sh 2025-05-23 00:13:43.785935 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-23 00:13:43.785946 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-23 00:13:43.785957 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-23 00:13:43.785968 | orchestrator | ++ CEPH_VERSION=reef 2025-05-23 00:13:43.785979 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-23 00:13:43.785991 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-23 00:13:43.786002 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-23 00:13:43.786013 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-23 00:13:43.786058 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-23 00:13:43.786070 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-23 00:13:43.786111 | orchestrator | ++ export ARA=false 2025-05-23 00:13:43.786123 | orchestrator | ++ ARA=false 2025-05-23 00:13:43.786134 | orchestrator | ++ export TEMPEST=false 2025-05-23 00:13:43.786144 | orchestrator | ++ TEMPEST=false 2025-05-23 00:13:43.786155 | orchestrator | ++ export IS_ZUUL=true 2025-05-23 00:13:43.786166 | orchestrator | ++ IS_ZUUL=true 2025-05-23 00:13:43.786177 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:13:43.786188 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:13:43.786199 | orchestrator | ++ export EXTERNAL_API=false 2025-05-23 00:13:43.786210 | orchestrator | ++ EXTERNAL_API=false 2025-05-23 00:13:43.786230 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-23 00:13:43.786241 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-23 00:13:43.786252 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-23 00:13:43.786262 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-23 00:13:43.786273 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-23 00:13:43.786284 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-23 00:13:43.786295 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-23 00:13:43.786306 | orchestrator | ++ export INTERACTIVE=false 2025-05-23 00:13:43.786316 | orchestrator | ++ INTERACTIVE=false 2025-05-23 00:13:43.786327 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-23 00:13:43.786338 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-23 00:13:43.786349 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-23 00:13:43.786360 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-23 00:13:43.793425 | orchestrator | + set -e 2025-05-23 00:13:43.793475 | orchestrator | + VERSION=8.1.0 2025-05-23 00:13:43.793495 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-23 00:13:43.801825 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-23 00:13:43.801864 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-23 00:13:43.805927 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-23 00:13:43.809481 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-23 00:13:43.816632 | orchestrator | /opt/configuration ~ 2025-05-23 00:13:43.816688 | orchestrator | + set -e 2025-05-23 00:13:43.816702 | orchestrator | + pushd /opt/configuration 2025-05-23 00:13:43.816713 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-23 00:13:43.818105 | orchestrator | + source /opt/venv/bin/activate 2025-05-23 00:13:43.819385 | orchestrator | ++ deactivate nondestructive 2025-05-23 00:13:43.819407 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:43.819420 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:43.819432 | orchestrator | ++ hash -r 2025-05-23 00:13:43.819443 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:43.819454 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-23 00:13:43.819465 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-23 00:13:43.819476 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-23 00:13:43.819488 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-23 00:13:43.819520 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-23 00:13:43.819539 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-23 00:13:43.819550 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-23 00:13:43.819561 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:13:43.819573 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:13:43.819584 | orchestrator | ++ export PATH 2025-05-23 00:13:43.819594 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:43.819605 | orchestrator | ++ '[' -z '' ']' 2025-05-23 00:13:43.819616 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-23 00:13:43.819626 | orchestrator | ++ PS1='(venv) ' 2025-05-23 00:13:43.819637 | orchestrator | ++ export PS1 2025-05-23 00:13:43.819648 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-23 00:13:43.819659 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-23 00:13:43.819673 | orchestrator | ++ hash -r 2025-05-23 00:13:43.819697 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-23 00:13:44.882454 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-23 00:13:44.883049 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-23 00:13:44.884386 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-23 00:13:44.885578 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-23 00:13:44.886797 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-23 00:13:44.896921 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-05-23 00:13:44.898286 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-23 00:13:44.899346 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-23 00:13:44.900859 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-23 00:13:44.930831 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-23 00:13:44.932301 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-23 00:13:44.933814 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-23 00:13:44.935258 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-23 00:13:44.939244 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-23 00:13:45.139387 | orchestrator | ++ which gilt 2025-05-23 00:13:45.141501 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-23 00:13:45.141527 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-23 00:13:45.357930 | orchestrator | osism.cfg-generics: 2025-05-23 00:13:45.358004 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-23 00:13:46.897004 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-23 00:13:46.897156 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-23 00:13:46.897212 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-23 00:13:46.897238 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-23 00:13:47.844208 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-23 00:13:47.851323 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-23 00:13:48.151089 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-23 00:13:48.201857 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-23 00:13:48.201957 | orchestrator | + deactivate 2025-05-23 00:13:48.201973 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-23 00:13:48.201986 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:13:48.201997 | orchestrator | + export PATH 2025-05-23 00:13:48.202008 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-23 00:13:48.202189 | orchestrator | + '[' -n '' ']' 2025-05-23 00:13:48.202203 | orchestrator | + hash -r 2025-05-23 00:13:48.202214 | orchestrator | + '[' -n '' ']' 2025-05-23 00:13:48.202225 | orchestrator | + unset VIRTUAL_ENV 2025-05-23 00:13:48.202236 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-23 00:13:48.202248 | orchestrator | ~ 2025-05-23 00:13:48.202259 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-23 00:13:48.202271 | orchestrator | + unset -f deactivate 2025-05-23 00:13:48.202282 | orchestrator | + popd 2025-05-23 00:13:48.204119 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-23 00:13:48.204164 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-23 00:13:48.204187 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-23 00:13:48.253372 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-23 00:13:48.253462 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-23 00:13:48.253479 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-23 00:13:48.293183 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-23 00:13:48.293273 | orchestrator | + source /opt/venv/bin/activate 2025-05-23 00:13:48.293298 | orchestrator | ++ deactivate nondestructive 2025-05-23 00:13:48.293311 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:48.293323 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:48.293334 | orchestrator | ++ hash -r 2025-05-23 00:13:48.293345 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:48.293356 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-23 00:13:48.293770 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-23 00:13:48.293787 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-23 00:13:48.293801 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-23 00:13:48.293812 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-23 00:13:48.293822 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-23 00:13:48.293834 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-23 00:13:48.293851 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:13:48.293864 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:13:48.293875 | orchestrator | ++ export PATH 2025-05-23 00:13:48.293886 | orchestrator | ++ '[' -n '' ']' 2025-05-23 00:13:48.293897 | orchestrator | ++ '[' -z '' ']' 2025-05-23 00:13:48.293912 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-23 00:13:48.293923 | orchestrator | ++ PS1='(venv) ' 2025-05-23 00:13:48.293934 | orchestrator | ++ export PS1 2025-05-23 00:13:48.293945 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-23 00:13:48.293955 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-23 00:13:48.293970 | orchestrator | ++ hash -r 2025-05-23 00:13:48.294198 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-23 00:13:49.867619 | orchestrator | 2025-05-23 00:13:49.867749 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-23 00:13:49.867769 | orchestrator | 2025-05-23 00:13:49.867782 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-23 00:13:50.402234 | orchestrator | ok: [testbed-manager] 2025-05-23 00:13:50.402329 | orchestrator | 2025-05-23 00:13:50.402345 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-23 00:13:51.349537 | orchestrator | changed: [testbed-manager] 2025-05-23 00:13:51.349640 | orchestrator | 2025-05-23 00:13:51.349657 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-23 00:13:51.349670 | orchestrator | 2025-05-23 
00:13:51.349682 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:13:53.658646 | orchestrator | ok: [testbed-manager] 2025-05-23 00:13:53.658781 | orchestrator | 2025-05-23 00:13:53.658801 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-23 00:13:58.432339 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-23 00:13:58.432447 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-23 00:13:58.432464 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-23 00:13:58.432476 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-23 00:13:58.432487 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-23 00:13:58.432503 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-23 00:13:58.432515 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-23 00:13:58.432528 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-23 00:13:58.432539 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-23 00:13:58.432549 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-23 00:13:58.432561 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-23 00:13:58.432572 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-23 00:13:58.432583 | orchestrator | 2025-05-23 00:13:58.432595 | orchestrator | TASK [Check status] ************************************************************ 2025-05-23 00:15:15.023880 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-23 00:15:15.024002 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-23 00:15:15.024055 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-23 00:15:15.024077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-23 00:15:15.024110 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j944380855728.1584', 'results_file': '/home/dragon/.ansible_async/j944380855728.1584', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024132 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j990781669977.1609', 'results_file': '/home/dragon/.ansible_async/j990781669977.1609', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024148 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-23 00:15:15.024159 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-05-23 00:15:15.024170 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j351121220861.1634', 'results_file': '/home/dragon/.ansible_async/j351121220861.1634', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024182 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j794733955180.1666', 'results_file': '/home/dragon/.ansible_async/j794733955180.1666', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024193 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-23 00:15:15.024204 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j627913213149.1698', 'results_file': '/home/dragon/.ansible_async/j627913213149.1698', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024216 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j796401660784.1730', 'results_file': '/home/dragon/.ansible_async/j796401660784.1730', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024256 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j482100673989.1762', 'results_file': '/home/dragon/.ansible_async/j482100673989.1762', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024269 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j100370877983.1795', 'results_file': '/home/dragon/.ansible_async/j100370877983.1795', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024280 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j471045006571.1835', 'results_file': '/home/dragon/.ansible_async/j471045006571.1835', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024291 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j21237528045.1861', 'results_file': '/home/dragon/.ansible_async/j21237528045.1861', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024302 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j598236698493.1893', 'results_file': '/home/dragon/.ansible_async/j598236698493.1893', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024313 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j869767905362.1927', 'results_file': '/home/dragon/.ansible_async/j869767905362.1927', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-23 00:15:15.024324 | orchestrator | 2025-05-23 00:15:15.024336 | 
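The "Pull images" / "Check status" pair above starts each docker pull as an asynchronous Ansible job and then polls the jobs (with retries) until every pull has finished. Outside of Ansible the same idea can be sketched with background shell jobs; the image list below is a subset of the one shown in the task output:

    images=(
        registry.osism.tech/osism/osism-ansible:8.1.0
        registry.osism.tech/osism/kolla-ansible:8.1.0
        registry.osism.tech/osism/ceph-ansible:8.1.0
        registry.osism.tech/dockerhub/library/mariadb:11.6.2
    )
    for image in "${images[@]}"; do
        docker pull "$image" &        # start every pull in the background
    done
    wait                              # block until all pulls have completed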
orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-23 00:15:15.076923 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:15.077073 | orchestrator | 2025-05-23 00:15:15.077093 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-23 00:15:15.530580 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:15.530680 | orchestrator | 2025-05-23 00:15:15.530696 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-23 00:15:15.894415 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:15.894512 | orchestrator | 2025-05-23 00:15:15.894527 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-23 00:15:16.274239 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:16.274320 | orchestrator | 2025-05-23 00:15:16.274330 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-23 00:15:16.327568 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:15:16.327651 | orchestrator | 2025-05-23 00:15:16.327662 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-23 00:15:16.657872 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:16.657967 | orchestrator | 2025-05-23 00:15:16.657982 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-23 00:15:16.789217 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:15:16.789313 | orchestrator | 2025-05-23 00:15:16.789328 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-23 00:15:16.789339 | orchestrator | 2025-05-23 00:15:16.789348 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:15:18.674931 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:18.675126 | orchestrator | 2025-05-23 00:15:18.675145 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-23 00:15:18.782399 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-23 00:15:18.782496 | orchestrator | 2025-05-23 00:15:18.782511 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-23 00:15:18.841840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-23 00:15:18.841978 | orchestrator | 2025-05-23 00:15:18.841989 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-23 00:15:19.968163 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-23 00:15:19.968272 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-23 00:15:19.968288 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-23 00:15:19.968301 | orchestrator | 2025-05-23 00:15:19.968313 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-23 00:15:21.766168 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-23 00:15:21.766278 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-23 00:15:21.766292 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-23 
00:15:21.766304 | orchestrator | 2025-05-23 00:15:21.766315 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-23 00:15:22.398385 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:22.398489 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:22.398506 | orchestrator | 2025-05-23 00:15:22.398543 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-23 00:15:23.062930 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:23.063083 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:23.063103 | orchestrator | 2025-05-23 00:15:23.063116 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-23 00:15:23.124681 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:15:23.124767 | orchestrator | 2025-05-23 00:15:23.124781 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-23 00:15:23.475908 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:23.476059 | orchestrator | 2025-05-23 00:15:23.476077 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-23 00:15:23.538175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-23 00:15:23.538264 | orchestrator | 2025-05-23 00:15:23.538279 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-23 00:15:24.530416 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:24.530526 | orchestrator | 2025-05-23 00:15:24.530542 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-23 00:15:25.337538 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:25.337664 | orchestrator | 2025-05-23 00:15:25.337681 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-23 00:15:28.918104 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:28.918217 | orchestrator | 2025-05-23 00:15:28.918237 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-23 00:15:29.041502 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-23 00:15:29.041598 | orchestrator | 2025-05-23 00:15:29.041613 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-23 00:15:29.112846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-23 00:15:29.112950 | orchestrator | 2025-05-23 00:15:29.112967 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-23 00:15:31.895571 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:31.895685 | orchestrator | 2025-05-23 00:15:31.895703 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-23 00:15:32.025346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-23 00:15:32.025443 | orchestrator | 2025-05-23 00:15:32.025459 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-23 
00:15:33.167262 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-23 00:15:33.167373 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-23 00:15:33.167389 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-23 00:15:33.167466 | orchestrator | 2025-05-23 00:15:33.167480 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-23 00:15:33.244554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-23 00:15:33.244645 | orchestrator | 2025-05-23 00:15:33.244660 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-23 00:15:33.935517 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-23 00:15:33.935616 | orchestrator | 2025-05-23 00:15:33.935630 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-23 00:15:34.615041 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:34.615152 | orchestrator | 2025-05-23 00:15:34.615170 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-23 00:15:35.262147 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:35.262252 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:35.262271 | orchestrator | 2025-05-23 00:15:35.262285 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-23 00:15:35.665344 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:35.665475 | orchestrator | 2025-05-23 00:15:35.665507 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-23 00:15:36.014510 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:36.014630 | orchestrator | 2025-05-23 00:15:36.014658 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-23 00:15:36.066185 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:15:36.066269 | orchestrator | 2025-05-23 00:15:36.066280 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-23 00:15:36.727342 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:36.727444 | orchestrator | 2025-05-23 00:15:36.727460 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-23 00:15:36.792175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-23 00:15:36.792235 | orchestrator | 2025-05-23 00:15:36.792249 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-23 00:15:37.537396 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-23 00:15:37.537504 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-23 00:15:37.537520 | orchestrator | 2025-05-23 00:15:37.537533 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-23 00:15:38.157413 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-23 00:15:38.157513 | orchestrator | 2025-05-23 00:15:38.157528 | orchestrator | TASK [osism.services.netbox : 
Copy netbox configuration file] ****************** 2025-05-23 00:15:38.766285 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:38.766389 | orchestrator | 2025-05-23 00:15:38.766406 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-23 00:15:38.818597 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:15:38.818708 | orchestrator | 2025-05-23 00:15:38.818725 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-23 00:15:39.440428 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:39.440528 | orchestrator | 2025-05-23 00:15:39.440545 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-23 00:15:41.213570 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:41.213674 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:41.213689 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:15:41.213702 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:41.213715 | orchestrator | 2025-05-23 00:15:41.213727 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-23 00:15:47.324295 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-23 00:15:47.324413 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-23 00:15:47.324430 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-23 00:15:47.324442 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-23 00:15:47.324478 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-23 00:15:47.324490 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-23 00:15:47.324501 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-23 00:15:47.324529 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-23 00:15:47.324542 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-23 00:15:47.324554 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-23 00:15:47.324564 | orchestrator | 2025-05-23 00:15:47.324576 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-23 00:15:47.947342 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-23 00:15:47.947443 | orchestrator | 2025-05-23 00:15:47.947458 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-23 00:15:48.028300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-23 00:15:48.028382 | orchestrator | 2025-05-23 00:15:48.028396 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-23 00:15:48.732846 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:48.732947 | orchestrator | 2025-05-23 00:15:48.732965 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-23 00:15:49.376825 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:49.376922 | orchestrator | 2025-05-23 00:15:49.376938 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-23 
00:15:50.076083 | orchestrator | changed: [testbed-manager] 2025-05-23 00:15:50.076184 | orchestrator | 2025-05-23 00:15:50.076200 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-23 00:15:52.448148 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:52.448274 | orchestrator | 2025-05-23 00:15:52.448303 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-23 00:15:53.381822 | orchestrator | ok: [testbed-manager] 2025-05-23 00:15:53.381934 | orchestrator | 2025-05-23 00:15:53.381953 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-23 00:16:15.566145 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-23 00:16:15.566264 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:15.566283 | orchestrator | 2025-05-23 00:16:15.566295 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-23 00:16:15.614819 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:15.614860 | orchestrator | 2025-05-23 00:16:15.614872 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-23 00:16:15.614884 | orchestrator | 2025-05-23 00:16:15.614895 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-23 00:16:15.646675 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:15.646708 | orchestrator | 2025-05-23 00:16:15.646721 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-23 00:16:15.996401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-23 00:16:15.996495 | orchestrator | 2025-05-23 00:16:15.996510 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-23 00:16:16.802811 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:16.802914 | orchestrator | 2025-05-23 00:16:16.802929 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-23 00:16:16.881335 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:16.881435 | orchestrator | 2025-05-23 00:16:16.881450 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-23 00:16:16.939617 | orchestrator | ok: [testbed-manager] => { 2025-05-23 00:16:16.939701 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-23 00:16:16.939716 | orchestrator | } 2025-05-23 00:16:16.939729 | orchestrator | 2025-05-23 00:16:16.939740 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-23 00:16:17.564305 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:17.564439 | orchestrator | 2025-05-23 00:16:17.564455 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-23 00:16:18.451461 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:18.451563 | orchestrator | 2025-05-23 00:16:18.451578 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-23 00:16:18.522534 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:18.522616 | orchestrator | 2025-05-23 00:16:18.522632 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-23 00:16:18.568584 | orchestrator | ok: [testbed-manager] => { 2025-05-23 00:16:18.568675 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-23 00:16:18.568689 | orchestrator | } 2025-05-23 00:16:18.568700 | orchestrator | 2025-05-23 00:16:18.568711 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-23 00:16:18.627668 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.627732 | orchestrator | 2025-05-23 00:16:18.627749 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-23 00:16:18.687255 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.687332 | orchestrator | 2025-05-23 00:16:18.687345 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-23 00:16:18.744049 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.744102 | orchestrator | 2025-05-23 00:16:18.744115 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-23 00:16:18.805055 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.805117 | orchestrator | 2025-05-23 00:16:18.805131 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-23 00:16:18.857836 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.857923 | orchestrator | 2025-05-23 00:16:18.857940 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-23 00:16:18.946780 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:16:18.946877 | orchestrator | 2025-05-23 00:16:18.946899 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-23 00:16:20.029493 | orchestrator | changed: [testbed-manager] 2025-05-23 00:16:20.029599 | orchestrator | 2025-05-23 00:16:20.029616 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-23 00:16:20.093742 | orchestrator | ok: [testbed-manager] 2025-05-23 00:16:20.093843 | orchestrator | 2025-05-23 00:16:20.093862 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-23 00:17:20.150493 | orchestrator | Pausing for 60 seconds 2025-05-23 00:17:20.150613 | orchestrator | changed: [testbed-manager] 2025-05-23 00:17:20.150629 | orchestrator | 2025-05-23 00:17:20.150643 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-23 00:17:20.213423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-23 00:17:20.213510 | orchestrator | 2025-05-23 00:17:20.213524 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-23 00:21:00.140186 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-23 00:21:00.140296 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-23 00:21:00.140310 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-23 00:21:00.140321 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-23 00:21:00.140331 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-23 00:21:00.140341 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-23 00:21:00.140350 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-23 00:21:00.140360 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-23 00:21:00.140370 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-23 00:21:00.140404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-23 00:21:00.140414 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-23 00:21:00.140424 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-23 00:21:00.140434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-23 00:21:00.140444 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-23 00:21:00.140453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-23 00:21:00.140465 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-23 00:21:00.140522 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-23 00:21:00.140532 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-23 00:21:00.140541 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-23 00:21:00.140551 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-23 00:21:00.140561 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
2025-05-23 00:21:00.140571 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:00.140583 | orchestrator | 2025-05-23 00:21:00.140593 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-23 00:21:00.140603 | orchestrator | 2025-05-23 00:21:00.140613 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:21:02.211761 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:02.211899 | orchestrator | 2025-05-23 00:21:02.211916 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-23 00:21:02.320907 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-23 00:21:02.321001 | orchestrator | 2025-05-23 00:21:02.321015 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-23 00:21:02.389701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-23 00:21:02.389836 | orchestrator | 2025-05-23 00:21:02.389854 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-23 00:21:04.300153 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:04.300461 | orchestrator | 2025-05-23 00:21:04.300498 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-23 00:21:04.359482 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:04.359578 | orchestrator | 2025-05-23 00:21:04.359594 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-23 00:21:04.457200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-23 00:21:04.457294 | orchestrator | 2025-05-23 00:21:04.457308 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-23 00:21:07.328681 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-23 00:21:07.328854 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-23 00:21:07.328873 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-23 00:21:07.328886 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-23 00:21:07.328898 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-23 00:21:07.328909 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-23 00:21:07.328920 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-23 00:21:07.328931 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-23 00:21:07.328942 | orchestrator | 2025-05-23 00:21:07.328981 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-23 00:21:07.960432 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:07.960540 | orchestrator | 2025-05-23 00:21:07.960558 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-23 00:21:08.061289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-23 00:21:08.061384 | orchestrator | 2025-05-23 00:21:08.061398 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-23 00:21:09.274495 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-23 00:21:09.274595 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-23 00:21:09.274610 | orchestrator | 2025-05-23 00:21:09.274622 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-23 00:21:09.913438 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:09.913542 | orchestrator | 2025-05-23 00:21:09.913558 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-23 00:21:09.985466 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:21:09.985543 | orchestrator | 2025-05-23 00:21:09.985557 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-23 00:21:10.056585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-23 00:21:10.056663 | orchestrator | 2025-05-23 00:21:10.056678 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-23 00:21:11.475161 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:21:11.475264 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:21:11.475280 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:11.475293 | orchestrator | 2025-05-23 00:21:11.475305 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-23 00:21:12.138655 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:12.138771 | orchestrator | 2025-05-23 00:21:12.138889 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-23 00:21:12.242000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-23 00:21:12.242138 | orchestrator | 2025-05-23 00:21:12.242153 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-23 00:21:13.543446 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:21:13.543554 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:21:13.543570 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:13.543583 | orchestrator | 2025-05-23 00:21:13.543595 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-23 00:21:14.176182 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:14.176286 | orchestrator | 2025-05-23 00:21:14.176301 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] ***** 2025-05-23 00:21:14.868101 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:14.868200 | orchestrator | 2025-05-23 00:21:14.868214 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-23 00:21:14.984313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-23 00:21:14.984408 | orchestrator | 2025-05-23 00:21:14.984422 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-23 00:21:15.595677 | orchestrator | changed: 
[testbed-manager] 2025-05-23 00:21:15.595842 | orchestrator | 2025-05-23 00:21:15.595862 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-23 00:21:16.015026 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:16.015113 | orchestrator | 2025-05-23 00:21:16.015122 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-23 00:21:17.482346 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-23 00:21:17.482480 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-23 00:21:17.482496 | orchestrator | 2025-05-23 00:21:17.482510 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-23 00:21:18.148163 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:18.148283 | orchestrator | 2025-05-23 00:21:18.148302 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-23 00:21:18.571728 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:18.571910 | orchestrator | 2025-05-23 00:21:18.571928 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-23 00:21:18.948121 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:18.948222 | orchestrator | 2025-05-23 00:21:18.948238 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-23 00:21:19.001470 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:21:19.001536 | orchestrator | 2025-05-23 00:21:19.001550 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-23 00:21:19.092166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-23 00:21:19.092257 | orchestrator | 2025-05-23 00:21:19.092271 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-23 00:21:19.141725 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:19.141778 | orchestrator | 2025-05-23 00:21:19.141828 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-23 00:21:21.220445 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-23 00:21:21.220586 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-23 00:21:21.220604 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-23 00:21:21.220616 | orchestrator | 2025-05-23 00:21:21.220628 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-23 00:21:21.930267 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:21.930367 | orchestrator | 2025-05-23 00:21:21.930382 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-23 00:21:22.681024 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:22.681127 | orchestrator | 2025-05-23 00:21:22.681144 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-23 00:21:23.418353 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:23.419285 | orchestrator | 2025-05-23 00:21:23.419326 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-23 00:21:23.490725 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-23 00:21:23.490837 | orchestrator | 2025-05-23 00:21:23.490851 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-23 00:21:23.539133 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:23.539222 | orchestrator | 2025-05-23 00:21:23.539240 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-23 00:21:24.306608 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-23 00:21:24.306718 | orchestrator | 2025-05-23 00:21:24.306738 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-23 00:21:24.429515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-23 00:21:24.429622 | orchestrator | 2025-05-23 00:21:24.429639 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-23 00:21:25.224324 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:25.224425 | orchestrator | 2025-05-23 00:21:25.224440 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-23 00:21:25.833705 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:25.833883 | orchestrator | 2025-05-23 00:21:25.833902 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-23 00:21:25.873404 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:21:25.873483 | orchestrator | 2025-05-23 00:21:25.873497 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-23 00:21:25.920193 | orchestrator | ok: [testbed-manager] 2025-05-23 00:21:25.920270 | orchestrator | 2025-05-23 00:21:25.920283 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-23 00:21:26.783034 | orchestrator | changed: [testbed-manager] 2025-05-23 00:21:26.783171 | orchestrator | 2025-05-23 00:21:26.783207 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-23 00:22:09.517171 | orchestrator | changed: [testbed-manager] 2025-05-23 00:22:09.517290 | orchestrator | 2025-05-23 00:22:09.517307 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-23 00:22:10.222230 | orchestrator | ok: [testbed-manager] 2025-05-23 00:22:10.222329 | orchestrator | 2025-05-23 00:22:10.222345 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-23 00:22:13.121066 | orchestrator | changed: [testbed-manager] 2025-05-23 00:22:13.121202 | orchestrator | 2025-05-23 00:22:13.121221 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-23 00:22:13.183115 | orchestrator | ok: [testbed-manager] 2025-05-23 00:22:13.183220 | orchestrator | 2025-05-23 00:22:13.183235 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-23 00:22:13.183248 | orchestrator | 2025-05-23 00:22:13.183259 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-23 00:22:13.229229 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:22:13.229323 | 
orchestrator | 2025-05-23 00:22:13.229337 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-23 00:23:13.292410 | orchestrator | Pausing for 60 seconds 2025-05-23 00:23:13.292563 | orchestrator | changed: [testbed-manager] 2025-05-23 00:23:13.292585 | orchestrator | 2025-05-23 00:23:13.292598 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-23 00:23:18.847613 | orchestrator | changed: [testbed-manager] 2025-05-23 00:23:18.847791 | orchestrator | 2025-05-23 00:23:18.847812 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-23 00:24:00.537929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-23 00:24:00.538113 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-23 00:24:00.538131 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:00.538145 | orchestrator | 2025-05-23 00:24:00.538157 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-23 00:24:06.170646 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:06.170823 | orchestrator | 2025-05-23 00:24:06.170841 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-23 00:24:06.251597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-23 00:24:06.251737 | orchestrator | 2025-05-23 00:24:06.251754 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-23 00:24:06.251767 | orchestrator | 2025-05-23 00:24:06.251779 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-23 00:24:06.308518 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:24:06.308606 | orchestrator | 2025-05-23 00:24:06.308617 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:24:06.308629 | orchestrator | testbed-manager : ok=110 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-23 00:24:06.308639 | orchestrator | 2025-05-23 00:24:06.420751 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-23 00:24:06.420849 | orchestrator | + deactivate 2025-05-23 00:24:06.420865 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-23 00:24:06.420879 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-23 00:24:06.420890 | orchestrator | + export PATH 2025-05-23 00:24:06.420901 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-23 00:24:06.420913 | orchestrator | + '[' -n '' ']' 2025-05-23 00:24:06.420925 | orchestrator | + hash -r 2025-05-23 00:24:06.420936 | orchestrator | + '[' -n '' ']' 2025-05-23 00:24:06.420947 | orchestrator | + unset VIRTUAL_ENV 2025-05-23 00:24:06.420958 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-23 00:24:06.420970 | orchestrator | + '[' '!' 
'' = nondestructive ']'
2025-05-23 00:24:06.420980 | orchestrator | + unset -f deactivate
2025-05-23 00:24:06.420993 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-23 00:24:06.428376 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-23 00:24:06.428423 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-23 00:24:06.428435 | orchestrator | + local max_attempts=60
2025-05-23 00:24:06.428447 | orchestrator | + local name=ceph-ansible
2025-05-23 00:24:06.428458 | orchestrator | + local attempt_num=1
2025-05-23 00:24:06.429475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-23 00:24:06.455355 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-23 00:24:06.455442 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-23 00:24:06.455457 | orchestrator | + local max_attempts=60
2025-05-23 00:24:06.455470 | orchestrator | + local name=kolla-ansible
2025-05-23 00:24:06.455481 | orchestrator | + local attempt_num=1
2025-05-23 00:24:06.456097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-23 00:24:06.492398 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-23 00:24:06.492452 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-23 00:24:06.492464 | orchestrator | + local max_attempts=60
2025-05-23 00:24:06.492476 | orchestrator | + local name=osism-ansible
2025-05-23 00:24:06.492487 | orchestrator | + local attempt_num=1
2025-05-23 00:24:06.493316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-23 00:24:06.528043 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-23 00:24:06.528128 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-23 00:24:06.528142 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-23 00:24:07.231220 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-23 00:24:07.451862 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-23 00:24:07.451985 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452004 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452016 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-05-23 00:24:07.452029 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-23 00:24:07.452061 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452078 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452089 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452100 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy)
2025-05-23 00:24:07.452111 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452122 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-23 00:24:07.452133 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452169 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452180 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-23 00:24:07.452191 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452202 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452213 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.452224 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy)
2025-05-23 00:24:07.456907 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-23 00:24:07.581747 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-23 00:24:07.581847 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy)
2025-05-23 00:24:07.581862 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy)
2025-05-23 00:24:07.581875 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp
2025-05-23 00:24:07.581888 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp
2025-05-23 00:24:07.587169 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-23 00:24:07.637534 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-23 00:24:07.637621 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-23 00:24:07.642553 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-23 00:24:09.194441 | orchestrator | 2025-05-23 00:24:09 | INFO  | Task e78c2279-706b-4547-a7b8-4b80a55610bf (resolvconf) was prepared for execution.
2025-05-23 00:24:09.194538 | orchestrator | 2025-05-23 00:24:09 | INFO  | It takes a moment until task e78c2279-706b-4547-a7b8-4b80a55610bf (resolvconf) has been started and output is visible here.
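[editor's note] The xtrace above shows the deployment script polling container health with docker inspect inside a helper named wait_for_container_healthy. A minimal sketch of what such a helper plausibly looks like; the polling interval and the failure handling are assumptions, only the inspect call and the max_attempts/name/attempt_num locals are visible in the trace:

    wait_for_container_healthy() {
        # Sketch only: poll the Docker health status of a named container until
        # it reports "healthy", giving up after max_attempts polls.
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name is still not healthy, giving up" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5  # assumed polling interval, not visible in the trace
        done
    }

    # Usage as seen in the trace above:
    # wait_for_container_healthy 60 ceph-ansible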
2025-05-23 00:24:12.082337 | orchestrator | 2025-05-23 00:24:12.082447 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-23 00:24:12.082809 | orchestrator | 2025-05-23 00:24:12.083737 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:24:12.084996 | orchestrator | Friday 23 May 2025 00:24:12 +0000 (0:00:00.084) 0:00:00.085 ************ 2025-05-23 00:24:15.980975 | orchestrator | ok: [testbed-manager] 2025-05-23 00:24:15.981083 | orchestrator | 2025-05-23 00:24:15.981765 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-23 00:24:15.982392 | orchestrator | Friday 23 May 2025 00:24:15 +0000 (0:00:03.901) 0:00:03.986 ************ 2025-05-23 00:24:16.037408 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:24:16.038083 | orchestrator | 2025-05-23 00:24:16.039553 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-23 00:24:16.040565 | orchestrator | Friday 23 May 2025 00:24:16 +0000 (0:00:00.058) 0:00:04.044 ************ 2025-05-23 00:24:16.141095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-23 00:24:16.149000 | orchestrator | 2025-05-23 00:24:16.149816 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-23 00:24:16.150259 | orchestrator | Friday 23 May 2025 00:24:16 +0000 (0:00:00.102) 0:00:04.147 ************ 2025-05-23 00:24:16.221017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-23 00:24:16.221106 | orchestrator | 2025-05-23 00:24:16.221121 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-23 00:24:16.221367 | orchestrator | Friday 23 May 2025 00:24:16 +0000 (0:00:00.081) 0:00:04.229 ************ 2025-05-23 00:24:17.218291 | orchestrator | ok: [testbed-manager] 2025-05-23 00:24:17.218405 | orchestrator | 2025-05-23 00:24:17.218422 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-23 00:24:17.218496 | orchestrator | Friday 23 May 2025 00:24:17 +0000 (0:00:00.995) 0:00:05.224 ************ 2025-05-23 00:24:17.274075 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:24:17.274157 | orchestrator | 2025-05-23 00:24:17.274171 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-23 00:24:17.274312 | orchestrator | Friday 23 May 2025 00:24:17 +0000 (0:00:00.056) 0:00:05.281 ************ 2025-05-23 00:24:17.745807 | orchestrator | ok: [testbed-manager] 2025-05-23 00:24:17.746276 | orchestrator | 2025-05-23 00:24:17.747534 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-23 00:24:17.748205 | orchestrator | Friday 23 May 2025 00:24:17 +0000 (0:00:00.471) 0:00:05.753 ************ 2025-05-23 00:24:17.821024 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:24:17.821964 | orchestrator | 2025-05-23 00:24:17.823033 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-23 00:24:17.823937 | orchestrator | Friday 23 May 2025 00:24:17 +0000 (0:00:00.075) 0:00:05.828 
************ 2025-05-23 00:24:18.372464 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:18.372900 | orchestrator | 2025-05-23 00:24:18.373695 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-23 00:24:18.374329 | orchestrator | Friday 23 May 2025 00:24:18 +0000 (0:00:00.546) 0:00:06.374 ************ 2025-05-23 00:24:19.424874 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:19.424977 | orchestrator | 2025-05-23 00:24:19.426166 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-23 00:24:19.426343 | orchestrator | Friday 23 May 2025 00:24:19 +0000 (0:00:01.055) 0:00:07.430 ************ 2025-05-23 00:24:20.367015 | orchestrator | ok: [testbed-manager] 2025-05-23 00:24:20.367899 | orchestrator | 2025-05-23 00:24:20.368720 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-23 00:24:20.369686 | orchestrator | Friday 23 May 2025 00:24:20 +0000 (0:00:00.942) 0:00:08.373 ************ 2025-05-23 00:24:20.455732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-23 00:24:20.455893 | orchestrator | 2025-05-23 00:24:20.455984 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-23 00:24:20.457984 | orchestrator | Friday 23 May 2025 00:24:20 +0000 (0:00:00.089) 0:00:08.462 ************ 2025-05-23 00:24:21.536831 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:21.537547 | orchestrator | 2025-05-23 00:24:21.537735 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:24:21.538592 | orchestrator | 2025-05-23 00:24:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:24:21.538744 | orchestrator | 2025-05-23 00:24:21 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:24:21.539784 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-23 00:24:21.540544 | orchestrator |
2025-05-23 00:24:21.541207 | orchestrator | Friday 23 May 2025 00:24:21 +0000 (0:00:01.080) 0:00:09.543 ************
2025-05-23 00:24:21.541941 | orchestrator | ===============================================================================
2025-05-23 00:24:21.542605 | orchestrator | Gathering Facts --------------------------------------------------------- 3.90s
2025-05-23 00:24:21.543099 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s
2025-05-23 00:24:21.543433 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s
2025-05-23 00:24:21.543970 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s
2025-05-23 00:24:21.544910 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2025-05-23 00:24:21.544951 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-05-23 00:24:21.545399 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-05-23 00:24:21.546509 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-05-23 00:24:21.546761 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-05-23 00:24:21.547063 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-05-23 00:24:21.547344 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-23 00:24:21.548145 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-05-23 00:24:21.548374 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-23 00:24:21.880727 | orchestrator | + osism apply sshconfig
2025-05-23 00:24:23.250988 | orchestrator | 2025-05-23 00:24:23 | INFO  | Task 6a2d2136-acaa-4ea1-9459-e511b43986fd (sshconfig) was prepared for execution.
2025-05-23 00:24:23.251090 | orchestrator | 2025-05-23 00:24:23 | INFO  | It takes a moment until task 6a2d2136-acaa-4ea1-9459-e511b43986fd (sshconfig) has been started and output is visible here.
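[editor's note] The sshconfig play that follows creates one snippet per host below the operator user's ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. A rough shell equivalent of that pattern, assuming the dragon operator user visible in the trace above; the fields inside each snippet are illustrative assumptions, only the config.d layout and the assemble step appear in the log:

    # Sketch of the config.d + assemble pattern, not the role's actual templates.
    install -d -m 0700 /home/dragon/.ssh/config.d
    for host in testbed-manager testbed-node-{0..5}; do
        cat > "/home/dragon/.ssh/config.d/${host}" <<EOF
    Host ${host}
        User dragon
    EOF
    done
    # "Assemble ssh config": concatenate all snippets into one file.
    cat /home/dragon/.ssh/config.d/* > /home/dragon/.ssh/config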
2025-05-23 00:24:26.033104 | orchestrator | 2025-05-23 00:24:26.033194 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-23 00:24:26.034897 | orchestrator | 2025-05-23 00:24:26.035754 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-23 00:24:26.035803 | orchestrator | Friday 23 May 2025 00:24:26 +0000 (0:00:00.076) 0:00:00.076 ************ 2025-05-23 00:24:26.525195 | orchestrator | ok: [testbed-manager] 2025-05-23 00:24:26.525562 | orchestrator | 2025-05-23 00:24:26.526440 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-23 00:24:26.527154 | orchestrator | Friday 23 May 2025 00:24:26 +0000 (0:00:00.490) 0:00:00.567 ************ 2025-05-23 00:24:26.954097 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:26.954283 | orchestrator | 2025-05-23 00:24:26.955230 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-23 00:24:26.955492 | orchestrator | Friday 23 May 2025 00:24:26 +0000 (0:00:00.431) 0:00:00.998 ************ 2025-05-23 00:24:31.918369 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-23 00:24:31.918480 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-23 00:24:31.919171 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-23 00:24:31.919913 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-23 00:24:31.920884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-23 00:24:31.921368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-23 00:24:31.922552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-23 00:24:31.923144 | orchestrator | 2025-05-23 00:24:31.923601 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-23 00:24:31.924061 | orchestrator | Friday 23 May 2025 00:24:31 +0000 (0:00:04.963) 0:00:05.961 ************ 2025-05-23 00:24:31.979294 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:24:31.979494 | orchestrator | 2025-05-23 00:24:31.980373 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-23 00:24:31.980721 | orchestrator | Friday 23 May 2025 00:24:31 +0000 (0:00:00.062) 0:00:06.023 ************ 2025-05-23 00:24:32.520228 | orchestrator | changed: [testbed-manager] 2025-05-23 00:24:32.521108 | orchestrator | 2025-05-23 00:24:32.522252 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:24:32.522588 | orchestrator | 2025-05-23 00:24:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:24:32.522737 | orchestrator | 2025-05-23 00:24:32 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:24:32.523685 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:24:32.524574 | orchestrator | 2025-05-23 00:24:32.525396 | orchestrator | Friday 23 May 2025 00:24:32 +0000 (0:00:00.541) 0:00:06.564 ************ 2025-05-23 00:24:32.526013 | orchestrator | =============================================================================== 2025-05-23 00:24:32.526726 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.96s 2025-05-23 00:24:32.527237 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2025-05-23 00:24:32.527810 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.49s 2025-05-23 00:24:32.528291 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.43s 2025-05-23 00:24:32.528781 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-05-23 00:24:32.863668 | orchestrator | + osism apply known-hosts 2025-05-23 00:24:34.225251 | orchestrator | 2025-05-23 00:24:34 | INFO  | Task ae8cc1ae-799d-40d5-84f1-6216005ffb76 (known-hosts) was prepared for execution. 2025-05-23 00:24:34.225371 | orchestrator | 2025-05-23 00:24:34 | INFO  | It takes a moment until task ae8cc1ae-799d-40d5-84f1-6216005ffb76 (known-hosts) has been started and output is visible here. 2025-05-23 00:24:37.116405 | orchestrator | 2025-05-23 00:24:37.116512 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-23 00:24:37.118847 | orchestrator | 2025-05-23 00:24:37.119581 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-23 00:24:37.120252 | orchestrator | Friday 23 May 2025 00:24:37 +0000 (0:00:00.102) 0:00:00.102 ************ 2025-05-23 00:24:42.964359 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-23 00:24:42.965084 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-23 00:24:42.966823 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-23 00:24:42.967896 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-23 00:24:42.968301 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-23 00:24:42.968983 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-23 00:24:42.969521 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-23 00:24:42.970483 | orchestrator | 2025-05-23 00:24:42.971173 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-23 00:24:42.971651 | orchestrator | Friday 23 May 2025 00:24:42 +0000 (0:00:05.849) 0:00:05.951 ************ 2025-05-23 00:24:43.126084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-23 00:24:43.126385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-23 00:24:43.126827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-23 
00:24:43.127798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-23 00:24:43.128776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-23 00:24:43.128970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-23 00:24:43.129453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-23 00:24:43.129995 | orchestrator | 2025-05-23 00:24:43.130424 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:43.130782 | orchestrator | Friday 23 May 2025 00:24:43 +0000 (0:00:00.161) 0:00:06.112 ************ 2025-05-23 00:24:44.293263 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFYL4enls7MgtWzLEqqgUsoidRWrEWqK9PJCLJp9wl5F) 2025-05-23 00:24:44.293490 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwJCfAhne0v6fc3i4bJRYesH1pmYHafuxJQdTm5YMo/r0EzSQFjfhVxoNW1pCKAyHexOfN7IIdytQ+G9O6J5t6gQmyODWT3kMJfW9ZgzKDsjRhgw6bPW//bfX5/5OnJQFFU5vL0s0HNcDcXg9S9zC+c7YSFaBTZfr97MKgnsE1bLkR63pDGC3LybKvj/0Q/KJxpIfKcRVGWdAfyJx2pPX9Pki7ziOq/6a8zQnV9fAKXzHLramm16KOM/xZk01RsHRzBrvCN9h4h1j2TGcZphyyN7Fs+ktGCd1n6EH2VVq6UpXTllvLnZ532aJagt50bEJ7492iHPLBcra6d5jDSMc1XxheCDn7lWUn6RWsCSyQOk6SdQpqYOhVugj8O6lboMUlACiVuOEkUSlVFYXf9jQiHwm/222pYfEdpE7l9Fc43HflGkJrfRYMWXWXTeOEYQP5nJhuIGBxnN51s3qGtXxiu1i2S6yybDCmU5hIZRNMU/86PaqT9RsjUw+zfMgFqBs=) 2025-05-23 00:24:44.294148 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzXsTkXydVKYDvQ/PDw4BugsPD/wyLMu2WtgARuW+jz4Xi5LEQJeNohNR7WpX1UdKr9095jsR8Acl2hR5K6q3E=) 2025-05-23 00:24:44.294669 | orchestrator | 2025-05-23 00:24:44.295290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:44.295774 | orchestrator | Friday 23 May 2025 00:24:44 +0000 (0:00:01.167) 0:00:07.280 ************ 2025-05-23 00:24:45.336577 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAoCFnTstHOsxRE70HScGposPBdUzsKx7z5NZbH7zdTYM7A5rcR6pZH9gU0ZOftWI9GtdP9plABAjqi0cBKF0X6RazkT1bMozaIz0L22ABotb2Lub+ukXGVogNm2E3eLCFkhbWM6MYs9osZVF/5IOasy27ed9BfPmHff0TtqdRX31l6jrN9vHt7m/rDXgKXpD1euXrkSsmdb66bpr/6WfHDyi762g96q5NX5O1W1nIJ28VmKMTby2a1Bcbt2QZUzf1alBwP0m+r3vxT2x5Z88DL/r0AnO6YMqaSVdR3XypCUO5lruVp4/53w2ymppaUb65SXv6cbnyzmY2sc2H5/z9Rjmj/if/fcAmSrv2wXiEsE9gqp3LnwIOtYrfhebAO08MT301fGWGIt6Z514VLC9W1kqEo+v1Sk9Z2z0olXgaJN1W4In9uuTH2Oz++l6s8GrWBe2g/VbAhzb3IQ8iCaFFqAhlinCoFmTbnI3SxbDwLIFkuiiYQNuwBKIXo2Aa0V0=) 2025-05-23 00:24:45.337196 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC7wPAH875ludxmTt3tTEMFgr83fiZK8uuitykFsX0BJ) 2025-05-23 00:24:45.337789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMm8N0yFWmzdlaHJj9jnymoDakY8Ly37TWc7Rramy+Xbs6ca/vTnf5pqnEl2B9UFobn9WocEHTJmuCgVMu4uM9Y=) 2025-05-23 00:24:45.338507 | orchestrator | 2025-05-23 00:24:45.339156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:45.339565 | orchestrator | Friday 23 May 2025 00:24:45 +0000 (0:00:01.044) 0:00:08.325 ************ 2025-05-23 00:24:46.365556 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ATMb/SCObKMWqB6H5RNM3GLAZQLfyYe/9KHrwvA8b2YG9O+kkPLfi3YxCzQZpe37wo/qBpuLFwYy84iyzw0/0McSitmIXN8AzLokn0vmYzPTQ4r2XBzbtjHXMpKmEt7hIuT3J9QI3c+gJVe6c0iG/8aj8bUxwM/2cvvB4aqBWBOb1kGd1Apdgv3fd1Zu2bxnxhCPVIHXB5wGNwHyUS+G4ymsbGQS5v3sucPcX11lavRK/Jdh0D32n2RfeiVrFhGolNgbH0VOy2bD63wny36S+FCjJDEK7ncneUS15cLjg03DR8AFELc9Ap9zWfpBMwwMiZiMmpake7azp6fxP8+/5caYMHnlHpFcb2RIF8eVrywbVOXao//tw619+FdPdxy/ZwUuO1fUERx8ionosQ0aW7NqSH+9a3A+JKiL7ZKAxqkZfzFXuuBV+9goMSYu6v1Pj0uyZs8U9734/GHFcTnJGHoeYlkak6POPR9z/vzMY35eb7t8qJsVH+DkWQ0x/As=) 2025-05-23 00:24:46.366289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJbQvtr2lpw7mWeBwDrPEFvbnpAEW7YfKMgNBIL05Flf) 2025-05-23 00:24:46.367224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5I+NCkw4pvsDXURcOEAGYfDoTJSErQSMuq37s+rRPEPDkAp5XEidAysCwxK1Do2ogoup7GL1Htn/prLmABMR0=) 2025-05-23 00:24:46.368333 | orchestrator | 2025-05-23 00:24:46.368769 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:46.369645 | orchestrator | Friday 23 May 2025 00:24:46 +0000 (0:00:01.028) 0:00:09.353 ************ 2025-05-23 00:24:47.411433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv1dKyXJ6X8vznn7lasumEwKqDzGIezsPPfyOXhWBGlwZCXX6XXKOMCnNOPS0hOw9nYxq3XCygC+v9boqIB0Nm8/xeQemPjn3lrAd9EKI7o0nT/dP19ZvLGLNsbuO1xYbdXX/GQLYpilhFcTIKI4sb6f/ZFO3Cd1cLRhvhi5bcVlO5A6naBhfH+VqJlPrQvcduffsGIyxUE69mS6/DtWLj9Pp2Tzmi50nWjWLdZJJI3p2cveZJV2XpX8Me9bgTqXXpa4nbJgkZHYlmr+MwKWyCAqtSfTxXp9fu7uQsN7Go+QFTNEW6treh1zSlFeXtYyhKC0t4NFhopOFYAi5u/HKnv73GSmYUejCUvfsO8f8/9tNtuVCZCB7os3batd7T+slgavfOInPQ5/W6n1ejlPkuVnP430HlnBWZSgOP/YIuYwnJL6xmt900BqcfqpbYydZv4PJMqC7xx8TGWT+ElwvSwVKo+ZKEYtq8yBpLoWugZjCybcxsIAY/sDCTA5G8uRk=) 2025-05-23 00:24:47.411660 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDIzaURGs8+nfSlRkkGPka26/JAqkoYRYYsFl/xfOGYzBpVBbtQnRO6wPlr5qwo25HKlnj6a1XyOdj1pg6g9E6Q=) 2025-05-23 00:24:47.411802 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPbQQGek3JMWAEyTpGHUrQMNb2zTvLCsGOvoEtt0EZGf) 2025-05-23 00:24:47.412157 | orchestrator | 2025-05-23 00:24:47.412390 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:47.412663 | orchestrator | Friday 23 May 2025 00:24:47 +0000 (0:00:01.046) 0:00:10.400 ************ 2025-05-23 00:24:48.402363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvXXiNf2gnhobt4xVg6G54WBCeK4puwZSq8m65kqUlYrMdQtO0HCbX16b/UYUpXfXQXTE0325ggJI/bk0liOgcas+gEez0zhlSZ9ycIy10u9xyrbHDmGbMtIbuPyfMR1e/R7w6Cu2dsKPJmQXFCKmtlRQU2HsJri9xUm1dlIcjkH90YcABGKdy3NUxdBUiAjmWkWlSuVN50MqcLNXUHZfAho4egwGi9raELm/d0khGxl7+GbZxUKfkHTexdznyVuSUGb2iQkuD0OvrhGfLQQVmlfhUo7dIoRnpekpMuh6AKN0LJrqJbUYzVEN1ZRFwBzowLzAR8dQykikhZgMndx6yqyB7eFS1BNKatajomnHMR5CoIo2DqbTnBoLM28Zn9jX1KzMQ28qbXr23IE9+SM94ClTrEMRVk4s6zCfeyZzKSTo92MA6UpZ1wz8XeLILhIv62mtTXANhJhHgikB+LfDWEFghmz4/kOoJ5/wU/j5iY04LDlFjgfRRKMJWS2w3Uzs=) 2025-05-23 00:24:48.402847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNK2HHgNsabmSdZarPJvmSys6AO+UyViV5vOql45vvdCgEFW28BzWfv06XaY7ltd8+Bvptmckqd81MA1qo454I=) 2025-05-23 00:24:48.403706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnd7U0rnatoGRjvZYEQY7k4r/JCC/n4AyAb/ZwnXbBF) 2025-05-23 00:24:48.404502 | orchestrator | 2025-05-23 00:24:48.405049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:48.405567 | orchestrator | Friday 23 May 2025 00:24:48 +0000 (0:00:00.982) 0:00:11.382 ************ 2025-05-23 00:24:49.408404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+U+uE2dn+ILEoKq57rqrG+BaUFr1UA5ZYFShuJmC0Xza8wBfe27wuhqUFHd9X5+BPifEyrYd1eA1+WbtZn5hbE8DZK78xba3mUO7HhyFIFM36s9amkZsLCLBfiZc2/4clLhpEbj9A1xi1OvGlKKObs7XjtHZc/C9jU2xZpkbbz88o6FIKLO7WSkJdyePtqaejJ5rehmUzclgoIOjKVh0JC0uo9/yYNY0s6m5rrn3BnlBKmXnQO8LOcdOUuJn6dvZLzWpLMFr8qfk+DdQ+dc7crlgZwXZAWZHW7j6fRHAOjYeu7hACI5LLLyrmdXxT7T5Ez9AkBQ2Wh15zsvDF2Sqh4jHM9jxKYiStwDPVADT0lO34W3gv47bjFY5+dNoB0JniPEbzVL8W3Kvo81Ge7HgqZWT2csS24H82qVcMDzEroTeW2O2YIKtIrJXC2HaMygMDh16cyGkuUzOYhmGHZ2aCS3dQY04B22OH49XpqkIHHSt4qpw+zMBMlNiJIUGSRk=) 2025-05-23 00:24:49.408833 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQtcu5g+N7IE3RSWHxt1932T1RPs1+eVkH9D6Vk2zKYuEM2Y272ogKEpWJC6I8GsijCW3gT1sAxP03uMsgxBl8=) 2025-05-23 00:24:49.409152 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP6RQtpgeiMSqVDtb7gdkaj+BSK/zsGIr6tLNtAj8EMe) 2025-05-23 00:24:49.410109 | orchestrator | 2025-05-23 00:24:49.410136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:49.410495 | orchestrator | Friday 23 May 2025 00:24:49 +0000 (0:00:01.013) 0:00:12.396 ************ 2025-05-23 00:24:50.444067 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUYDujjqS4TU37JCtYlyyvGNokceIoAExkRwuzXA86xKuYIdzVGl0x+0Wb+cFOy/3qD3WEc5ZGrCn8ZA5RPvnOrv+9jD4w4MDB4wM+CiWQ6bbJCz20hwklEtLGeauuLpt/JsMe3intUQx4KqEOUFqsp7k668tecwX6XXIpn0Cgmto3VF+0tCxca+kCa8fegB00tsG4RYKCIVRJLqE7gC0wLcooPTBxjZzdfEyLPzYgI6gzuww/9d6gVun4BdMU7C4S9LNg/CoWMPLgZ9ulPVsdmIr/fM7K+ZMo7GwzSouNLButNIlVettYipmHvNegZB7HZCaCpHV+PzHg07vSqLhrj6bMQS5Ah3zcWbgfyz/UocpvPHDE53mmNDfrRvth41nWG+9eDh47PGb61lfSTeZcW6zaGR6r2MASD5QC6vt1UXG/9cBl2B8IX9OzKKDc3nCM9mHkEclb1ShgioVZRB+qbF54iZSvhZqf0QuOTllp5UM+Cfvh0NldJkgNbBLjba8=) 2025-05-23 00:24:50.445041 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpSV2E4pZoOlIx0c9nvpsvzzq+otNhFJTfDfr3pq3fbcudpK9r47fYiEkWaTqi/pXaFl/0wEsxjS3L9sY6IkRI=) 2025-05-23 
00:24:50.445708 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPI6KjnFUK2aFigvx7okrMFNnkKBBD6pVMIDxZSr04+S) 2025-05-23 00:24:50.446161 | orchestrator | 2025-05-23 00:24:50.446771 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-23 00:24:50.447046 | orchestrator | Friday 23 May 2025 00:24:50 +0000 (0:00:01.035) 0:00:13.431 ************ 2025-05-23 00:24:55.749514 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-23 00:24:55.750551 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-23 00:24:55.750623 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-23 00:24:55.751254 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-23 00:24:55.751606 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-23 00:24:55.753234 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-23 00:24:55.753671 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-23 00:24:55.754096 | orchestrator | 2025-05-23 00:24:55.754401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-23 00:24:55.754967 | orchestrator | Friday 23 May 2025 00:24:55 +0000 (0:00:05.305) 0:00:18.737 ************ 2025-05-23 00:24:55.919233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-23 00:24:55.919359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-23 00:24:55.919376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-23 00:24:55.919409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-23 00:24:55.919907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-23 00:24:55.920327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-23 00:24:55.920945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-23 00:24:55.921266 | orchestrator | 2025-05-23 00:24:55.921816 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:55.922252 | orchestrator | Friday 23 May 2025 00:24:55 +0000 (0:00:00.169) 0:00:18.906 ************ 2025-05-23 00:24:56.933475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFYL4enls7MgtWzLEqqgUsoidRWrEWqK9PJCLJp9wl5F) 2025-05-23 00:24:56.933605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwJCfAhne0v6fc3i4bJRYesH1pmYHafuxJQdTm5YMo/r0EzSQFjfhVxoNW1pCKAyHexOfN7IIdytQ+G9O6J5t6gQmyODWT3kMJfW9ZgzKDsjRhgw6bPW//bfX5/5OnJQFFU5vL0s0HNcDcXg9S9zC+c7YSFaBTZfr97MKgnsE1bLkR63pDGC3LybKvj/0Q/KJxpIfKcRVGWdAfyJx2pPX9Pki7ziOq/6a8zQnV9fAKXzHLramm16KOM/xZk01RsHRzBrvCN9h4h1j2TGcZphyyN7Fs+ktGCd1n6EH2VVq6UpXTllvLnZ532aJagt50bEJ7492iHPLBcra6d5jDSMc1XxheCDn7lWUn6RWsCSyQOk6SdQpqYOhVugj8O6lboMUlACiVuOEkUSlVFYXf9jQiHwm/222pYfEdpE7l9Fc43HflGkJrfRYMWXWXTeOEYQP5nJhuIGBxnN51s3qGtXxiu1i2S6yybDCmU5hIZRNMU/86PaqT9RsjUw+zfMgFqBs=) 2025-05-23 00:24:56.934237 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzXsTkXydVKYDvQ/PDw4BugsPD/wyLMu2WtgARuW+jz4Xi5LEQJeNohNR7WpX1UdKr9095jsR8Acl2hR5K6q3E=) 2025-05-23 00:24:56.934802 | orchestrator | 2025-05-23 00:24:56.936272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:56.936699 | orchestrator | Friday 23 May 2025 00:24:56 +0000 (0:00:01.014) 0:00:19.920 ************ 2025-05-23 00:24:57.943854 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAoCFnTstHOsxRE70HScGposPBdUzsKx7z5NZbH7zdTYM7A5rcR6pZH9gU0ZOftWI9GtdP9plABAjqi0cBKF0X6RazkT1bMozaIz0L22ABotb2Lub+ukXGVogNm2E3eLCFkhbWM6MYs9osZVF/5IOasy27ed9BfPmHff0TtqdRX31l6jrN9vHt7m/rDXgKXpD1euXrkSsmdb66bpr/6WfHDyi762g96q5NX5O1W1nIJ28VmKMTby2a1Bcbt2QZUzf1alBwP0m+r3vxT2x5Z88DL/r0AnO6YMqaSVdR3XypCUO5lruVp4/53w2ymppaUb65SXv6cbnyzmY2sc2H5/z9Rjmj/if/fcAmSrv2wXiEsE9gqp3LnwIOtYrfhebAO08MT301fGWGIt6Z514VLC9W1kqEo+v1Sk9Z2z0olXgaJN1W4In9uuTH2Oz++l6s8GrWBe2g/VbAhzb3IQ8iCaFFqAhlinCoFmTbnI3SxbDwLIFkuiiYQNuwBKIXo2Aa0V0=) 2025-05-23 00:24:57.944009 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMm8N0yFWmzdlaHJj9jnymoDakY8Ly37TWc7Rramy+Xbs6ca/vTnf5pqnEl2B9UFobn9WocEHTJmuCgVMu4uM9Y=) 2025-05-23 00:24:57.944039 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC7wPAH875ludxmTt3tTEMFgr83fiZK8uuitykFsX0BJ) 2025-05-23 00:24:57.944060 | orchestrator | 2025-05-23 00:24:57.944168 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:57.944591 | orchestrator | Friday 23 May 2025 00:24:57 +0000 (0:00:01.008) 0:00:20.929 ************ 2025-05-23 00:24:58.945542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJbQvtr2lpw7mWeBwDrPEFvbnpAEW7YfKMgNBIL05Flf) 2025-05-23 00:24:58.945770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ATMb/SCObKMWqB6H5RNM3GLAZQLfyYe/9KHrwvA8b2YG9O+kkPLfi3YxCzQZpe37wo/qBpuLFwYy84iyzw0/0McSitmIXN8AzLokn0vmYzPTQ4r2XBzbtjHXMpKmEt7hIuT3J9QI3c+gJVe6c0iG/8aj8bUxwM/2cvvB4aqBWBOb1kGd1Apdgv3fd1Zu2bxnxhCPVIHXB5wGNwHyUS+G4ymsbGQS5v3sucPcX11lavRK/Jdh0D32n2RfeiVrFhGolNgbH0VOy2bD63wny36S+FCjJDEK7ncneUS15cLjg03DR8AFELc9Ap9zWfpBMwwMiZiMmpake7azp6fxP8+/5caYMHnlHpFcb2RIF8eVrywbVOXao//tw619+FdPdxy/ZwUuO1fUERx8ionosQ0aW7NqSH+9a3A+JKiL7ZKAxqkZfzFXuuBV+9goMSYu6v1Pj0uyZs8U9734/GHFcTnJGHoeYlkak6POPR9z/vzMY35eb7t8qJsVH+DkWQ0x/As=) 2025-05-23 00:24:58.946096 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5I+NCkw4pvsDXURcOEAGYfDoTJSErQSMuq37s+rRPEPDkAp5XEidAysCwxK1Do2ogoup7GL1Htn/prLmABMR0=) 2025-05-23 
00:24:58.946129 | orchestrator | 2025-05-23 00:24:58.946403 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:58.946879 | orchestrator | Friday 23 May 2025 00:24:58 +0000 (0:00:01.001) 0:00:21.931 ************ 2025-05-23 00:24:59.948091 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDIzaURGs8+nfSlRkkGPka26/JAqkoYRYYsFl/xfOGYzBpVBbtQnRO6wPlr5qwo25HKlnj6a1XyOdj1pg6g9E6Q=) 2025-05-23 00:24:59.948249 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv1dKyXJ6X8vznn7lasumEwKqDzGIezsPPfyOXhWBGlwZCXX6XXKOMCnNOPS0hOw9nYxq3XCygC+v9boqIB0Nm8/xeQemPjn3lrAd9EKI7o0nT/dP19ZvLGLNsbuO1xYbdXX/GQLYpilhFcTIKI4sb6f/ZFO3Cd1cLRhvhi5bcVlO5A6naBhfH+VqJlPrQvcduffsGIyxUE69mS6/DtWLj9Pp2Tzmi50nWjWLdZJJI3p2cveZJV2XpX8Me9bgTqXXpa4nbJgkZHYlmr+MwKWyCAqtSfTxXp9fu7uQsN7Go+QFTNEW6treh1zSlFeXtYyhKC0t4NFhopOFYAi5u/HKnv73GSmYUejCUvfsO8f8/9tNtuVCZCB7os3batd7T+slgavfOInPQ5/W6n1ejlPkuVnP430HlnBWZSgOP/YIuYwnJL6xmt900BqcfqpbYydZv4PJMqC7xx8TGWT+ElwvSwVKo+ZKEYtq8yBpLoWugZjCybcxsIAY/sDCTA5G8uRk=) 2025-05-23 00:24:59.948399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPbQQGek3JMWAEyTpGHUrQMNb2zTvLCsGOvoEtt0EZGf) 2025-05-23 00:24:59.948643 | orchestrator | 2025-05-23 00:24:59.949517 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:24:59.950377 | orchestrator | Friday 23 May 2025 00:24:59 +0000 (0:00:01.003) 0:00:22.934 ************ 2025-05-23 00:25:00.949267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvXXiNf2gnhobt4xVg6G54WBCeK4puwZSq8m65kqUlYrMdQtO0HCbX16b/UYUpXfXQXTE0325ggJI/bk0liOgcas+gEez0zhlSZ9ycIy10u9xyrbHDmGbMtIbuPyfMR1e/R7w6Cu2dsKPJmQXFCKmtlRQU2HsJri9xUm1dlIcjkH90YcABGKdy3NUxdBUiAjmWkWlSuVN50MqcLNXUHZfAho4egwGi9raELm/d0khGxl7+GbZxUKfkHTexdznyVuSUGb2iQkuD0OvrhGfLQQVmlfhUo7dIoRnpekpMuh6AKN0LJrqJbUYzVEN1ZRFwBzowLzAR8dQykikhZgMndx6yqyB7eFS1BNKatajomnHMR5CoIo2DqbTnBoLM28Zn9jX1KzMQ28qbXr23IE9+SM94ClTrEMRVk4s6zCfeyZzKSTo92MA6UpZ1wz8XeLILhIv62mtTXANhJhHgikB+LfDWEFghmz4/kOoJ5/wU/j5iY04LDlFjgfRRKMJWS2w3Uzs=) 2025-05-23 00:25:00.949380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNK2HHgNsabmSdZarPJvmSys6AO+UyViV5vOql45vvdCgEFW28BzWfv06XaY7ltd8+Bvptmckqd81MA1qo454I=) 2025-05-23 00:25:00.949400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnd7U0rnatoGRjvZYEQY7k4r/JCC/n4AyAb/ZwnXbBF) 2025-05-23 00:25:00.949414 | orchestrator | 2025-05-23 00:25:00.949426 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:25:00.949439 | orchestrator | Friday 23 May 2025 00:25:00 +0000 (0:00:01.000) 0:00:23.935 ************ 2025-05-23 00:25:01.982848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+U+uE2dn+ILEoKq57rqrG+BaUFr1UA5ZYFShuJmC0Xza8wBfe27wuhqUFHd9X5+BPifEyrYd1eA1+WbtZn5hbE8DZK78xba3mUO7HhyFIFM36s9amkZsLCLBfiZc2/4clLhpEbj9A1xi1OvGlKKObs7XjtHZc/C9jU2xZpkbbz88o6FIKLO7WSkJdyePtqaejJ5rehmUzclgoIOjKVh0JC0uo9/yYNY0s6m5rrn3BnlBKmXnQO8LOcdOUuJn6dvZLzWpLMFr8qfk+DdQ+dc7crlgZwXZAWZHW7j6fRHAOjYeu7hACI5LLLyrmdXxT7T5Ez9AkBQ2Wh15zsvDF2Sqh4jHM9jxKYiStwDPVADT0lO34W3gv47bjFY5+dNoB0JniPEbzVL8W3Kvo81Ge7HgqZWT2csS24H82qVcMDzEroTeW2O2YIKtIrJXC2HaMygMDh16cyGkuUzOYhmGHZ2aCS3dQY04B22OH49XpqkIHHSt4qpw+zMBMlNiJIUGSRk=) 2025-05-23 00:25:01.983069 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQtcu5g+N7IE3RSWHxt1932T1RPs1+eVkH9D6Vk2zKYuEM2Y272ogKEpWJC6I8GsijCW3gT1sAxP03uMsgxBl8=) 2025-05-23 00:25:01.983229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP6RQtpgeiMSqVDtb7gdkaj+BSK/zsGIr6tLNtAj8EMe) 2025-05-23 00:25:01.983664 | orchestrator | 2025-05-23 00:25:01.984509 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-23 00:25:01.984774 | orchestrator | Friday 23 May 2025 00:25:01 +0000 (0:00:01.034) 0:00:24.969 ************ 2025-05-23 00:25:03.021963 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpSV2E4pZoOlIx0c9nvpsvzzq+otNhFJTfDfr3pq3fbcudpK9r47fYiEkWaTqi/pXaFl/0wEsxjS3L9sY6IkRI=) 2025-05-23 00:25:03.022304 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPI6KjnFUK2aFigvx7okrMFNnkKBBD6pVMIDxZSr04+S) 2025-05-23 00:25:03.022486 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUYDujjqS4TU37JCtYlyyvGNokceIoAExkRwuzXA86xKuYIdzVGl0x+0Wb+cFOy/3qD3WEc5ZGrCn8ZA5RPvnOrv+9jD4w4MDB4wM+CiWQ6bbJCz20hwklEtLGeauuLpt/JsMe3intUQx4KqEOUFqsp7k668tecwX6XXIpn0Cgmto3VF+0tCxca+kCa8fegB00tsG4RYKCIVRJLqE7gC0wLcooPTBxjZzdfEyLPzYgI6gzuww/9d6gVun4BdMU7C4S9LNg/CoWMPLgZ9ulPVsdmIr/fM7K+ZMo7GwzSouNLButNIlVettYipmHvNegZB7HZCaCpHV+PzHg07vSqLhrj6bMQS5Ah3zcWbgfyz/UocpvPHDE53mmNDfrRvth41nWG+9eDh47PGb61lfSTeZcW6zaGR6r2MASD5QC6vt1UXG/9cBl2B8IX9OzKKDc3nCM9mHkEclb1ShgioVZRB+qbF54iZSvhZqf0QuOTllp5UM+Cfvh0NldJkgNbBLjba8=) 2025-05-23 00:25:03.023225 | orchestrator | 2025-05-23 00:25:03.023671 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-23 00:25:03.024140 | orchestrator | Friday 23 May 2025 00:25:03 +0000 (0:00:01.038) 0:00:26.007 ************ 2025-05-23 00:25:03.180668 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-23 00:25:03.180759 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-23 00:25:03.181370 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-23 00:25:03.181942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-23 00:25:03.182216 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-23 00:25:03.182643 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-23 00:25:03.183241 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-23 00:25:03.183896 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:25:03.184245 | orchestrator | 2025-05-23 00:25:03.184755 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-05-23 00:25:03.185187 | orchestrator | Friday 23 May 2025 00:25:03 +0000 (0:00:00.162) 0:00:26.170 ************ 2025-05-23 00:25:03.236879 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:25:03.237360 | orchestrator | 2025-05-23 00:25:03.238584 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-23 00:25:03.239356 | orchestrator | Friday 23 May 2025 00:25:03 +0000 (0:00:00.054) 0:00:26.225 ************ 2025-05-23 00:25:03.297294 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:25:03.297456 | orchestrator | 2025-05-23 00:25:03.297825 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-23 00:25:03.299021 | orchestrator | Friday 23 May 2025 00:25:03 +0000 (0:00:00.061) 0:00:26.286 ************ 2025-05-23 00:25:03.991973 | orchestrator | changed: [testbed-manager] 2025-05-23 00:25:03.992837 | orchestrator | 2025-05-23 00:25:03.993755 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:25:03.994561 | orchestrator | 2025-05-23 00:25:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:25:03.995162 | orchestrator | 2025-05-23 00:25:03 | INFO  | Please wait and do not abort execution. 2025-05-23 00:25:03.996573 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:25:03.997632 | orchestrator | 2025-05-23 00:25:03.998381 | orchestrator | Friday 23 May 2025 00:25:03 +0000 (0:00:00.692) 0:00:26.979 ************ 2025-05-23 00:25:03.999168 | orchestrator | =============================================================================== 2025-05-23 00:25:04.000189 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.85s 2025-05-23 00:25:04.000839 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.31s 2025-05-23 00:25:04.001646 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-05-23 00:25:04.002383 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-23 00:25:04.003011 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-23 00:25:04.005757 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-23 00:25:04.007179 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-23 00:25:04.007941 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-23 00:25:04.008796 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-23 00:25:04.009819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-23 00:25:04.010282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-23 00:25:04.010915 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-23 00:25:04.011890 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-23 00:25:04.012393 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-23 00:25:04.013113 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-23 00:25:04.013409 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-05-23 00:25:04.013997 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2025-05-23 00:25:04.014538 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-23 00:25:04.015009 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-05-23 00:25:04.015420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-23 00:25:04.345772 | orchestrator | + osism apply squid 2025-05-23 00:25:05.767839 | orchestrator | 2025-05-23 00:25:05 | INFO  | Task af3afc6c-2e27-4839-a21a-bd3cd5f7665d (squid) was prepared for execution. 2025-05-23 00:25:05.767965 | orchestrator | 2025-05-23 00:25:05 | INFO  | It takes a moment until task af3afc6c-2e27-4839-a21a-bd3cd5f7665d (squid) has been started and output is visible here. 2025-05-23 00:25:08.471293 | orchestrator | 2025-05-23 00:25:08.471377 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-23 00:25:08.471610 | orchestrator | 2025-05-23 00:25:08.472064 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-23 00:25:08.473419 | orchestrator | Friday 23 May 2025 00:25:08 +0000 (0:00:00.077) 0:00:00.077 ************ 2025-05-23 00:25:08.551595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-23 00:25:08.552022 | orchestrator | 2025-05-23 00:25:08.552984 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-23 00:25:08.553188 | orchestrator | Friday 23 May 2025 00:25:08 +0000 (0:00:00.083) 0:00:00.160 ************ 2025-05-23 00:25:09.597161 | orchestrator | ok: [testbed-manager] 2025-05-23 00:25:09.597992 | orchestrator | 2025-05-23 00:25:09.598911 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-23 00:25:09.599280 | orchestrator | Friday 23 May 2025 00:25:09 +0000 (0:00:01.043) 0:00:01.204 ************ 2025-05-23 00:25:10.576651 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-23 00:25:10.577787 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-23 00:25:10.578485 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-23 00:25:10.578972 | orchestrator | 2025-05-23 00:25:10.579729 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-23 00:25:10.580012 | orchestrator | Friday 23 May 2025 00:25:10 +0000 (0:00:00.979) 0:00:02.183 ************ 2025-05-23 00:25:11.523305 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-23 00:25:11.523499 | orchestrator | 2025-05-23 00:25:11.524322 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-23 00:25:11.525132 | orchestrator | Friday 23 May 2025 00:25:11 +0000 (0:00:00.947) 0:00:03.131 ************ 2025-05-23 00:25:11.862278 | orchestrator | ok: [testbed-manager] 2025-05-23 00:25:11.862452 | orchestrator | 2025-05-23 00:25:11.863473 | 
orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-23 00:25:11.864045 | orchestrator | Friday 23 May 2025 00:25:11 +0000 (0:00:00.338) 0:00:03.469 ************ 2025-05-23 00:25:12.758959 | orchestrator | changed: [testbed-manager] 2025-05-23 00:25:12.759080 | orchestrator | 2025-05-23 00:25:12.759417 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-23 00:25:12.759899 | orchestrator | Friday 23 May 2025 00:25:12 +0000 (0:00:00.895) 0:00:04.365 ************ 2025-05-23 00:25:44.029515 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-23 00:25:44.029634 | orchestrator | ok: [testbed-manager] 2025-05-23 00:25:44.029650 | orchestrator | 2025-05-23 00:25:44.029663 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-23 00:25:44.029786 | orchestrator | Friday 23 May 2025 00:25:44 +0000 (0:00:31.268) 0:00:35.634 ************ 2025-05-23 00:25:56.607274 | orchestrator | changed: [testbed-manager] 2025-05-23 00:25:56.607443 | orchestrator | 2025-05-23 00:25:56.607462 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-23 00:25:56.607477 | orchestrator | Friday 23 May 2025 00:25:56 +0000 (0:00:12.574) 0:00:48.208 ************ 2025-05-23 00:26:56.678727 | orchestrator | Pausing for 60 seconds 2025-05-23 00:26:56.678848 | orchestrator | changed: [testbed-manager] 2025-05-23 00:26:56.678866 | orchestrator | 2025-05-23 00:26:56.678879 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-23 00:26:56.678891 | orchestrator | Friday 23 May 2025 00:26:56 +0000 (0:01:00.072) 0:01:48.281 ************ 2025-05-23 00:26:56.741499 | orchestrator | ok: [testbed-manager] 2025-05-23 00:26:56.742265 | orchestrator | 2025-05-23 00:26:56.743188 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-23 00:26:56.743559 | orchestrator | Friday 23 May 2025 00:26:56 +0000 (0:00:00.068) 0:01:48.349 ************ 2025-05-23 00:26:57.331909 | orchestrator | changed: [testbed-manager] 2025-05-23 00:26:57.332554 | orchestrator | 2025-05-23 00:26:57.334104 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:26:57.334157 | orchestrator | 2025-05-23 00:26:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:26:57.334397 | orchestrator | 2025-05-23 00:26:57 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:26:57.335185 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:26:57.335733 | orchestrator | 2025-05-23 00:26:57.336644 | orchestrator | Friday 23 May 2025 00:26:57 +0000 (0:00:00.589) 0:01:48.939 ************ 2025-05-23 00:26:57.337020 | orchestrator | =============================================================================== 2025-05-23 00:26:57.337608 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-05-23 00:26:57.338420 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.27s 2025-05-23 00:26:57.338960 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.57s 2025-05-23 00:26:57.339300 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.04s 2025-05-23 00:26:57.340058 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.98s 2025-05-23 00:26:57.340755 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2025-05-23 00:26:57.341506 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-05-23 00:26:57.341702 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-05-23 00:26:57.342211 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2025-05-23 00:26:57.342735 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-05-23 00:26:57.343138 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-23 00:26:57.722614 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-23 00:26:57.722711 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-23 00:26:57.725437 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-23 00:26:57.772934 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-23 00:26:57.772998 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-23 00:26:57.773013 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-23 00:26:57.777914 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-23 00:26:57.783133 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-23 00:26:57.787730 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-23 00:26:59.172436 | orchestrator | 2025-05-23 00:26:59 | INFO  | Task df174c85-0809-4a35-8b54-993133bafc4c (operator) was prepared for execution. 2025-05-23 00:26:59.172542 | orchestrator | 2025-05-23 00:26:59 | INFO  | It takes a moment until task df174c85-0809-4a35-8b54-993133bafc4c (operator) has been started and output is visible here. 
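The shell steps above gate configuration tweaks on the release version (the `semver 8.1.0 9.0.0` comparison returning -1 means 8.1.0 is older than 9.0.0, so the commented-out vxlan network-dispatcher entries are re-enabled via sed) and then hand control back to the OSISM CLI. Below is a minimal sketch of the `osism apply` invocation pattern seen in this log, assuming a working OSISM manager; the flag meanings (`-u` remote user, `-l` inventory limit, `--environment` alternate play environment) are inferred from the surrounding output rather than taken from CLI documentation:

  # connect as "ubuntu" because the operator user does not yet exist on the nodes,
  # and limit the play to the testbed-nodes inventory group
  osism apply operator -u ubuntu -l testbed-nodes
  # run the "facts" play from the custom environment shipped with the testbed configuration
  osism apply --environment custom facts
  # default form: run a named play (here bootstrap) against its default hosts
  osism apply bootstrap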
2025-05-23 00:27:02.050940 | orchestrator | 2025-05-23 00:27:02.051728 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-23 00:27:02.054978 | orchestrator | 2025-05-23 00:27:02.055831 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-23 00:27:02.056805 | orchestrator | Friday 23 May 2025 00:27:02 +0000 (0:00:00.083) 0:00:00.083 ************ 2025-05-23 00:27:05.342683 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:27:05.342805 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:05.343870 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:05.345115 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:05.346963 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:27:05.347488 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:27:05.348075 | orchestrator | 2025-05-23 00:27:05.348816 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-23 00:27:05.349799 | orchestrator | Friday 23 May 2025 00:27:05 +0000 (0:00:03.293) 0:00:03.376 ************ 2025-05-23 00:27:06.120723 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:06.120830 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:27:06.120845 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:27:06.120959 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:06.121616 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:27:06.121892 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:06.124625 | orchestrator | 2025-05-23 00:27:06.124821 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-23 00:27:06.128269 | orchestrator | 2025-05-23 00:27:06.128826 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-23 00:27:06.129002 | orchestrator | Friday 23 May 2025 00:27:06 +0000 (0:00:00.771) 0:00:04.147 ************ 2025-05-23 00:27:06.188306 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:27:06.209377 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:27:06.232650 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:27:06.271706 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:06.272431 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:06.276035 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:06.276069 | orchestrator | 2025-05-23 00:27:06.276083 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-23 00:27:06.276097 | orchestrator | Friday 23 May 2025 00:27:06 +0000 (0:00:00.157) 0:00:04.305 ************ 2025-05-23 00:27:06.331265 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:27:06.350732 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:27:06.372003 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:27:06.408448 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:06.408612 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:06.409219 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:06.410160 | orchestrator | 2025-05-23 00:27:06.410652 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-23 00:27:06.413444 | orchestrator | Friday 23 May 2025 00:27:06 +0000 (0:00:00.137) 0:00:04.443 ************ 2025-05-23 00:27:07.051481 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:07.052487 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:07.052619 | orchestrator | changed: [testbed-node-4] 2025-05-23 
00:27:07.053742 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:07.053774 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:07.055220 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:07.055542 | orchestrator | 2025-05-23 00:27:07.058209 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-23 00:27:07.059052 | orchestrator | Friday 23 May 2025 00:27:07 +0000 (0:00:00.641) 0:00:05.085 ************ 2025-05-23 00:27:07.900854 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:07.900986 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:07.901892 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:07.901918 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:07.902930 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:07.903098 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:07.903226 | orchestrator | 2025-05-23 00:27:07.908194 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-23 00:27:07.908295 | orchestrator | Friday 23 May 2025 00:27:07 +0000 (0:00:00.846) 0:00:05.931 ************ 2025-05-23 00:27:09.095766 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-23 00:27:09.095925 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-23 00:27:09.097665 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-23 00:27:09.101717 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-23 00:27:09.101845 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-23 00:27:09.101859 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-23 00:27:09.101928 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-23 00:27:09.103090 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-23 00:27:09.103668 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-23 00:27:09.104478 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-23 00:27:09.104839 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-23 00:27:09.105299 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-23 00:27:09.106325 | orchestrator | 2025-05-23 00:27:09.106674 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-23 00:27:09.107052 | orchestrator | Friday 23 May 2025 00:27:09 +0000 (0:00:01.195) 0:00:07.127 ************ 2025-05-23 00:27:10.417202 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:10.417574 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:10.417975 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:10.418681 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:10.418805 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:10.420266 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:10.420457 | orchestrator | 2025-05-23 00:27:10.420917 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-23 00:27:10.421279 | orchestrator | Friday 23 May 2025 00:27:10 +0000 (0:00:01.321) 0:00:08.448 ************ 2025-05-23 00:27:11.621771 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-23 00:27:11.621874 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-23 00:27:11.621949 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-23 00:27:11.732772 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.735378 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.736173 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.736972 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.738208 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.739032 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-23 00:27:11.739486 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.740485 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.741403 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.743105 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.743156 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.743168 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-23 00:27:11.743221 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.744122 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.744992 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.745430 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.746248 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.746988 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-23 00:27:11.747975 | orchestrator | 2025-05-23 00:27:11.750604 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-23 00:27:11.750627 | orchestrator | Friday 23 May 2025 00:27:11 +0000 (0:00:01.316) 0:00:09.765 ************ 2025-05-23 00:27:12.289215 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:12.290401 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:12.290495 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:12.291539 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:12.291567 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:12.291755 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:12.292231 | orchestrator | 2025-05-23 00:27:12.295298 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-23 00:27:12.295328 | orchestrator | Friday 23 May 2025 00:27:12 +0000 (0:00:00.556) 0:00:10.322 ************ 2025-05-23 00:27:12.353352 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:27:12.373114 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:27:12.398306 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:27:12.441366 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:12.443808 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:12.444071 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:12.444710 | orchestrator | 2025-05-23 00:27:12.445061 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-23 00:27:12.445518 | orchestrator | Friday 23 May 2025 00:27:12 +0000 (0:00:00.151) 0:00:10.473 ************ 2025-05-23 00:27:13.135743 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 00:27:13.135861 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 00:27:13.135879 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:13.135893 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:13.135904 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-23 00:27:13.136172 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:13.136500 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 00:27:13.137192 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:13.137574 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 00:27:13.138181 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:13.139151 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-23 00:27:13.139665 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:13.139893 | orchestrator | 2025-05-23 00:27:13.140225 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-23 00:27:13.140912 | orchestrator | Friday 23 May 2025 00:27:13 +0000 (0:00:00.692) 0:00:11.166 ************ 2025-05-23 00:27:13.170183 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:27:13.190857 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:27:13.208500 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:27:13.225103 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:13.252905 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:13.252969 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:13.253735 | orchestrator | 2025-05-23 00:27:13.254517 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-23 00:27:13.255667 | orchestrator | Friday 23 May 2025 00:27:13 +0000 (0:00:00.117) 0:00:11.284 ************ 2025-05-23 00:27:13.292151 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:27:13.311634 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:27:13.330188 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:27:13.347967 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:13.373471 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:13.376183 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:13.376227 | orchestrator | 2025-05-23 00:27:13.376241 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-23 00:27:13.376498 | orchestrator | Friday 23 May 2025 00:27:13 +0000 (0:00:00.123) 0:00:11.407 ************ 2025-05-23 00:27:13.419235 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:27:13.441532 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:27:13.461656 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:27:13.481293 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:13.509686 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:13.510932 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:13.511773 | orchestrator | 2025-05-23 00:27:13.513029 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-23 00:27:13.513828 | orchestrator | Friday 23 May 2025 00:27:13 +0000 (0:00:00.135) 0:00:11.543 ************ 2025-05-23 00:27:14.171318 | orchestrator | changed: [testbed-node-0] 2025-05-23 
00:27:14.173688 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:14.174774 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:14.175677 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:14.176358 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:14.177191 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:14.178461 | orchestrator | 2025-05-23 00:27:14.179046 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-23 00:27:14.179940 | orchestrator | Friday 23 May 2025 00:27:14 +0000 (0:00:00.660) 0:00:12.203 ************ 2025-05-23 00:27:14.256974 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:27:14.280221 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:27:14.383144 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:27:14.384582 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:14.385520 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:14.387017 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:14.388411 | orchestrator | 2025-05-23 00:27:14.389303 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:27:14.390131 | orchestrator | 2025-05-23 00:27:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:27:14.391475 | orchestrator | 2025-05-23 00:27:14 | INFO  | Please wait and do not abort execution. 2025-05-23 00:27:14.392784 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.394134 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.395400 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.396763 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.397670 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.398455 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:27:14.398956 | orchestrator | 2025-05-23 00:27:14.399781 | orchestrator | Friday 23 May 2025 00:27:14 +0000 (0:00:00.214) 0:00:12.417 ************ 2025-05-23 00:27:14.400596 | orchestrator | =============================================================================== 2025-05-23 00:27:14.401592 | orchestrator | Gathering Facts --------------------------------------------------------- 3.29s 2025-05-23 00:27:14.402279 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s 2025-05-23 00:27:14.402953 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2025-05-23 00:27:14.403845 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2025-05-23 00:27:14.405224 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s 2025-05-23 00:27:14.405297 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2025-05-23 00:27:14.405512 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2025-05-23 00:27:14.406476 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.66s 2025-05-23 00:27:14.407050 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2025-05-23 00:27:14.407534 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2025-05-23 00:27:14.408150 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-05-23 00:27:14.408554 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-05-23 00:27:14.408815 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2025-05-23 00:27:14.409312 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2025-05-23 00:27:14.409700 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-05-23 00:27:14.410117 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s 2025-05-23 00:27:14.410993 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s 2025-05-23 00:27:14.782752 | orchestrator | + osism apply --environment custom facts 2025-05-23 00:27:16.106088 | orchestrator | 2025-05-23 00:27:16 | INFO  | Trying to run play facts in environment custom 2025-05-23 00:27:16.156536 | orchestrator | 2025-05-23 00:27:16 | INFO  | Task cee06b6b-f48e-4e2c-a5b7-0104d239b518 (facts) was prepared for execution. 2025-05-23 00:27:16.156597 | orchestrator | 2025-05-23 00:27:16 | INFO  | It takes a moment until task cee06b6b-f48e-4e2c-a5b7-0104d239b518 (facts) has been started and output is visible here. 2025-05-23 00:27:19.214977 | orchestrator | 2025-05-23 00:27:19.215090 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-23 00:27:19.215107 | orchestrator | 2025-05-23 00:27:19.215120 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-23 00:27:19.215199 | orchestrator | Friday 23 May 2025 00:27:19 +0000 (0:00:00.078) 0:00:00.078 ************ 2025-05-23 00:27:20.384347 | orchestrator | ok: [testbed-manager] 2025-05-23 00:27:21.447960 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:21.448153 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:21.451231 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:21.451729 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:21.453282 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:21.454100 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:21.454381 | orchestrator | 2025-05-23 00:27:21.454784 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-23 00:27:21.455261 | orchestrator | Friday 23 May 2025 00:27:21 +0000 (0:00:02.239) 0:00:02.318 ************ 2025-05-23 00:27:22.460017 | orchestrator | ok: [testbed-manager] 2025-05-23 00:27:23.336294 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:27:23.337279 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:23.337378 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:23.338178 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:23.338587 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:27:23.339739 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:27:23.339810 | orchestrator | 2025-05-23 00:27:23.340306 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-23 00:27:23.341086 | orchestrator | 2025-05-23 00:27:23.341211 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-23 00:27:23.342003 | orchestrator | Friday 23 May 2025 00:27:23 +0000 (0:00:01.884) 0:00:04.202 ************ 2025-05-23 00:27:23.433520 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:23.434123 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:23.435352 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:23.435965 | orchestrator | 2025-05-23 00:27:23.436991 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-23 00:27:23.437788 | orchestrator | Friday 23 May 2025 00:27:23 +0000 (0:00:00.101) 0:00:04.304 ************ 2025-05-23 00:27:23.540565 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:23.540729 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:23.541066 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:23.544523 | orchestrator | 2025-05-23 00:27:23.544722 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-23 00:27:23.544981 | orchestrator | Friday 23 May 2025 00:27:23 +0000 (0:00:00.107) 0:00:04.411 ************ 2025-05-23 00:27:23.634351 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:23.634521 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:23.636307 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:23.637880 | orchestrator | 2025-05-23 00:27:23.638872 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-23 00:27:23.641565 | orchestrator | Friday 23 May 2025 00:27:23 +0000 (0:00:00.093) 0:00:04.504 ************ 2025-05-23 00:27:23.775846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:27:23.776227 | orchestrator | 2025-05-23 00:27:23.776766 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-23 00:27:23.779445 | orchestrator | Friday 23 May 2025 00:27:23 +0000 (0:00:00.137) 0:00:04.642 ************ 2025-05-23 00:27:24.190474 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:24.191667 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:24.194067 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:24.194095 | orchestrator | 2025-05-23 00:27:24.194106 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-23 00:27:24.196038 | orchestrator | Friday 23 May 2025 00:27:24 +0000 (0:00:00.418) 0:00:05.060 ************ 2025-05-23 00:27:24.275676 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:24.275909 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:24.275980 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:24.276184 | orchestrator | 2025-05-23 00:27:24.276539 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-23 00:27:24.276751 | orchestrator | Friday 23 May 2025 00:27:24 +0000 (0:00:00.086) 0:00:05.146 ************ 2025-05-23 00:27:25.169750 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:25.169843 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:25.170582 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:25.170622 | orchestrator | 2025-05-23 00:27:25.170967 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-23 00:27:25.171453 | orchestrator | Friday 23 May 2025 00:27:25 +0000 (0:00:00.889) 0:00:06.036 ************ 2025-05-23 00:27:25.613435 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:25.616361 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:25.616533 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:25.617274 | orchestrator | 2025-05-23 00:27:25.617703 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-23 00:27:25.618220 | orchestrator | Friday 23 May 2025 00:27:25 +0000 (0:00:00.447) 0:00:06.483 ************ 2025-05-23 00:27:26.754683 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:26.754786 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:26.757660 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:26.758003 | orchestrator | 2025-05-23 00:27:26.758466 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-23 00:27:26.758831 | orchestrator | Friday 23 May 2025 00:27:26 +0000 (0:00:01.137) 0:00:07.621 ************ 2025-05-23 00:27:40.055067 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:40.055186 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:40.055202 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:40.055214 | orchestrator | 2025-05-23 00:27:40.055227 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-23 00:27:40.055240 | orchestrator | Friday 23 May 2025 00:27:40 +0000 (0:00:13.293) 0:00:20.914 ************ 2025-05-23 00:27:40.136718 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:27:40.136814 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:27:40.141234 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:27:40.141292 | orchestrator | 2025-05-23 00:27:40.141328 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-23 00:27:40.141343 | orchestrator | Friday 23 May 2025 00:27:40 +0000 (0:00:00.092) 0:00:21.007 ************ 2025-05-23 00:27:47.203210 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:27:47.203375 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:27:47.203382 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:27:47.203387 | orchestrator | 2025-05-23 00:27:47.203394 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-23 00:27:47.203400 | orchestrator | Friday 23 May 2025 00:27:47 +0000 (0:00:07.060) 0:00:28.067 ************ 2025-05-23 00:27:47.633285 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:47.633769 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:47.634212 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:47.635573 | orchestrator | 2025-05-23 00:27:47.637990 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-23 00:27:47.639148 | orchestrator | Friday 23 May 2025 00:27:47 +0000 (0:00:00.435) 0:00:28.503 ************ 2025-05-23 00:27:51.119384 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-23 00:27:51.120147 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-23 00:27:51.120464 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-23 00:27:51.121274 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 
2025-05-23 00:27:51.121899 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-23 00:27:51.123687 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-23 00:27:51.124457 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-23 00:27:51.124481 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-23 00:27:51.125131 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-23 00:27:51.125428 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-23 00:27:51.126102 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-23 00:27:51.126407 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-23 00:27:51.127753 | orchestrator | 2025-05-23 00:27:51.127796 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-23 00:27:51.127924 | orchestrator | Friday 23 May 2025 00:27:51 +0000 (0:00:03.483) 0:00:31.986 ************ 2025-05-23 00:27:52.138555 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:52.138681 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:52.139016 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:52.139887 | orchestrator | 2025-05-23 00:27:52.143164 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-23 00:27:52.143190 | orchestrator | 2025-05-23 00:27:52.143700 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:27:52.144547 | orchestrator | Friday 23 May 2025 00:27:52 +0000 (0:00:01.019) 0:00:33.006 ************ 2025-05-23 00:27:53.881617 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:27:57.913656 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:27:57.914437 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:27:57.915473 | orchestrator | ok: [testbed-manager] 2025-05-23 00:27:57.916759 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:27:57.917788 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:27:57.918104 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:27:57.918490 | orchestrator | 2025-05-23 00:27:57.919716 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:27:57.920038 | orchestrator | 2025-05-23 00:27:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:27:57.920062 | orchestrator | 2025-05-23 00:27:57 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:27:57.920464 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:27:57.920698 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:27:57.921135 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:27:57.921930 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:27:57.922934 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:27:57.923328 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:27:57.923722 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:27:57.924398 | orchestrator | 2025-05-23 00:27:57.925106 | orchestrator | Friday 23 May 2025 00:27:57 +0000 (0:00:05.775) 0:00:38.781 ************ 2025-05-23 00:27:57.925800 | orchestrator | =============================================================================== 2025-05-23 00:27:57.926090 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.29s 2025-05-23 00:27:57.926524 | orchestrator | Install required packages (Debian) -------------------------------------- 7.06s 2025-05-23 00:27:57.926776 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.78s 2025-05-23 00:27:57.928166 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s 2025-05-23 00:27:57.928372 | orchestrator | Create custom facts directory ------------------------------------------- 2.24s 2025-05-23 00:27:57.929100 | orchestrator | Copy fact file ---------------------------------------------------------- 1.88s 2025-05-23 00:27:57.929999 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s 2025-05-23 00:27:57.930408 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.02s 2025-05-23 00:27:57.931373 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.89s 2025-05-23 00:27:57.931982 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-05-23 00:27:57.932500 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2025-05-23 00:27:57.933265 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2025-05-23 00:27:57.933628 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-05-23 00:27:57.934275 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.11s 2025-05-23 00:27:57.934958 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-05-23 00:27:57.935381 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.09s 2025-05-23 00:27:57.935839 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-05-23 00:27:57.936820 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s 2025-05-23 00:27:58.338184 | orchestrator | + osism apply bootstrap 2025-05-23 00:27:59.766363 | 
orchestrator | 2025-05-23 00:27:59 | INFO  | Task bd35cb96-c428-47a1-8271-4a7a270451c7 (bootstrap) was prepared for execution. 2025-05-23 00:27:59.766460 | orchestrator | 2025-05-23 00:27:59 | INFO  | It takes a moment until task bd35cb96-c428-47a1-8271-4a7a270451c7 (bootstrap) has been started and output is visible here. 2025-05-23 00:28:02.832957 | orchestrator | 2025-05-23 00:28:02.834641 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-23 00:28:02.836452 | orchestrator | 2025-05-23 00:28:02.836481 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-23 00:28:02.836830 | orchestrator | Friday 23 May 2025 00:28:02 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-23 00:28:02.899535 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:02.922121 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:02.948640 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:02.971761 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:03.042324 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:03.042571 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:03.043667 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:03.043699 | orchestrator | 2025-05-23 00:28:03.047102 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-23 00:28:03.047132 | orchestrator | 2025-05-23 00:28:03.047145 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:28:03.047157 | orchestrator | Friday 23 May 2025 00:28:03 +0000 (0:00:00.213) 0:00:00.319 ************ 2025-05-23 00:28:06.633395 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:06.633551 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:06.633568 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:06.633605 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:06.633700 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:06.634486 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:06.635445 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:06.635658 | orchestrator | 2025-05-23 00:28:06.636472 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-23 00:28:06.637495 | orchestrator | 2025-05-23 00:28:06.638106 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:28:06.638734 | orchestrator | Friday 23 May 2025 00:28:06 +0000 (0:00:03.589) 0:00:03.909 ************ 2025-05-23 00:28:06.737429 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-23 00:28:06.737551 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-23 00:28:06.737564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-23 00:28:06.737575 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-23 00:28:06.766736 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-23 00:28:06.766787 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-23 00:28:06.766800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:28:06.766959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-23 00:28:06.767217 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-23 00:28:06.767645 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-3)  2025-05-23 00:28:06.810974 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-23 00:28:06.811042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:28:06.811142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:28:06.811160 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:28:06.811451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:28:07.088874 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-23 00:28:07.090132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:28:07.091436 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:07.092225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:28:07.093479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:28:07.094926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-23 00:28:07.096618 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-23 00:28:07.097341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:28:07.098488 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:28:07.099245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:28:07.100181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:28:07.101040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:28:07.102009 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:28:07.102729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-23 00:28:07.103408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:28:07.104262 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:07.105006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:28:07.105971 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:28:07.106688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:28:07.107135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:28:07.107999 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-23 00:28:07.108845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:28:07.109470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:28:07.110810 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:07.111013 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:07.111621 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:28:07.112070 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-23 00:28:07.113062 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-23 00:28:07.113745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:28:07.114126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-23 00:28:07.114459 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:07.115176 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:28:07.115655 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:28:07.116374 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:28:07.116972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:28:07.117275 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:07.120204 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:28:07.120229 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-23 00:28:07.120241 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-23 00:28:07.120253 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-23 00:28:07.120265 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:07.120277 | orchestrator | 2025-05-23 00:28:07.120686 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-23 00:28:07.121401 | orchestrator | 2025-05-23 00:28:07.121690 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-23 00:28:07.122451 | orchestrator | Friday 23 May 2025 00:28:07 +0000 (0:00:00.454) 0:00:04.364 ************ 2025-05-23 00:28:07.162965 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:07.184272 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:07.216120 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:07.240184 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:07.294942 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:07.295646 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:07.296465 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:07.296997 | orchestrator | 2025-05-23 00:28:07.297899 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-23 00:28:07.298889 | orchestrator | Friday 23 May 2025 00:28:07 +0000 (0:00:00.208) 0:00:04.572 ************ 2025-05-23 00:28:08.472581 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:08.473600 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:08.474349 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:08.475491 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:08.475895 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:08.476987 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:08.478242 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:08.478891 | orchestrator | 2025-05-23 00:28:08.479716 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-23 00:28:08.480070 | orchestrator | Friday 23 May 2025 00:28:08 +0000 (0:00:01.175) 0:00:05.748 ************ 2025-05-23 00:28:09.642559 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:09.642892 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:09.643382 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:09.644499 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:09.645450 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:09.646259 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:09.647167 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:09.647868 | orchestrator | 2025-05-23 00:28:09.648435 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-23 00:28:09.648892 | orchestrator | Friday 23 May 2025 00:28:09 +0000 (0:00:01.168) 0:00:06.916 ************ 2025-05-23 00:28:09.914818 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:09.914955 | orchestrator | 2025-05-23 00:28:09.915768 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-23 00:28:09.917264 | orchestrator | Friday 23 May 2025 00:28:09 +0000 (0:00:00.274) 0:00:07.191 ************ 2025-05-23 00:28:11.825994 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:11.826831 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:11.829532 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:11.829625 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:11.829705 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:11.830897 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:11.831351 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:11.833342 | orchestrator | 2025-05-23 00:28:11.833846 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-23 00:28:11.834588 | orchestrator | Friday 23 May 2025 00:28:11 +0000 (0:00:01.909) 0:00:09.101 ************ 2025-05-23 00:28:11.895925 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:12.063858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:12.064391 | orchestrator | 2025-05-23 00:28:12.064827 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-23 00:28:12.065713 | orchestrator | Friday 23 May 2025 00:28:12 +0000 (0:00:00.239) 0:00:09.340 ************ 2025-05-23 00:28:13.127946 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:13.129035 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:13.129079 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:13.129446 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:13.130748 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:13.130843 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:13.131480 | orchestrator | 2025-05-23 00:28:13.131824 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-23 00:28:13.132314 | orchestrator | Friday 23 May 2025 00:28:13 +0000 (0:00:01.062) 0:00:10.402 ************ 2025-05-23 00:28:13.192792 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:13.734712 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:13.735197 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:13.735681 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:13.736742 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:13.737083 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:13.746120 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:13.746145 | orchestrator | 2025-05-23 00:28:13.746159 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-23 00:28:13.746173 | orchestrator | Friday 23 May 2025 00:28:13 +0000 (0:00:00.607) 0:00:11.010 ************ 2025-05-23 00:28:13.831596 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:13.854734 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:13.876971 | 
orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:14.170655 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:14.170856 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:14.171925 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:14.175101 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:14.175129 | orchestrator | 2025-05-23 00:28:14.175143 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-23 00:28:14.175156 | orchestrator | Friday 23 May 2025 00:28:14 +0000 (0:00:00.436) 0:00:11.447 ************ 2025-05-23 00:28:14.239426 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:14.267365 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:14.286211 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:14.314218 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:14.372475 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:14.372767 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:14.373241 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:14.374251 | orchestrator | 2025-05-23 00:28:14.374530 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-23 00:28:14.375429 | orchestrator | Friday 23 May 2025 00:28:14 +0000 (0:00:00.202) 0:00:11.649 ************ 2025-05-23 00:28:14.699643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:14.701349 | orchestrator | 2025-05-23 00:28:14.701383 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-23 00:28:14.701397 | orchestrator | Friday 23 May 2025 00:28:14 +0000 (0:00:00.326) 0:00:11.975 ************ 2025-05-23 00:28:14.976805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:14.977476 | orchestrator | 2025-05-23 00:28:14.978501 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-23 00:28:14.981032 | orchestrator | Friday 23 May 2025 00:28:14 +0000 (0:00:00.275) 0:00:12.251 ************ 2025-05-23 00:28:16.125768 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:16.126978 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:16.128189 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:16.129147 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:16.129702 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:16.130743 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:16.131532 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:16.132186 | orchestrator | 2025-05-23 00:28:16.132717 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-23 00:28:16.133373 | orchestrator | Friday 23 May 2025 00:28:16 +0000 (0:00:01.150) 0:00:13.401 ************ 2025-05-23 00:28:16.191656 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:16.216011 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:16.236345 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:16.259932 | orchestrator | skipping: 
[testbed-node-5] 2025-05-23 00:28:16.312794 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:16.313411 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:16.315246 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:16.315297 | orchestrator | 2025-05-23 00:28:16.315317 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-23 00:28:16.315331 | orchestrator | Friday 23 May 2025 00:28:16 +0000 (0:00:00.185) 0:00:13.587 ************ 2025-05-23 00:28:16.839464 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:16.840596 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:16.841583 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:16.842675 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:16.843464 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:16.844523 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:16.845462 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:16.846486 | orchestrator | 2025-05-23 00:28:16.847356 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-23 00:28:16.848114 | orchestrator | Friday 23 May 2025 00:28:16 +0000 (0:00:00.527) 0:00:14.115 ************ 2025-05-23 00:28:16.941550 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:16.975076 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:17.027736 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:17.156866 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:17.156967 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:17.158418 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:17.158522 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:17.158615 | orchestrator | 2025-05-23 00:28:17.160883 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-23 00:28:17.160934 | orchestrator | Friday 23 May 2025 00:28:17 +0000 (0:00:00.317) 0:00:14.432 ************ 2025-05-23 00:28:17.738979 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:17.739242 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:17.740346 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:17.741086 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:17.741652 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:17.742110 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:17.742753 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:17.743349 | orchestrator | 2025-05-23 00:28:17.744132 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-23 00:28:17.744454 | orchestrator | Friday 23 May 2025 00:28:17 +0000 (0:00:00.582) 0:00:15.015 ************ 2025-05-23 00:28:18.804438 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:18.804561 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:18.804589 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:18.804666 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:18.804973 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:18.805350 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:18.805993 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:18.806599 | orchestrator | 2025-05-23 00:28:18.806887 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-23 00:28:18.807803 | orchestrator | Friday 23 May 2025 
00:28:18 +0000 (0:00:01.064) 0:00:16.079 ************ 2025-05-23 00:28:19.912443 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:19.912550 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:19.912565 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:19.912704 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:19.913554 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:19.914491 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:19.915601 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:19.915953 | orchestrator | 2025-05-23 00:28:19.916650 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-23 00:28:19.917165 | orchestrator | Friday 23 May 2025 00:28:19 +0000 (0:00:01.105) 0:00:17.184 ************ 2025-05-23 00:28:20.183749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:20.183927 | orchestrator | 2025-05-23 00:28:20.187249 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-23 00:28:20.187358 | orchestrator | Friday 23 May 2025 00:28:20 +0000 (0:00:00.274) 0:00:17.459 ************ 2025-05-23 00:28:20.248082 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:21.554857 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:21.555212 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:21.556069 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:21.559144 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:21.559171 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:21.559183 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:21.559195 | orchestrator | 2025-05-23 00:28:21.559248 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-23 00:28:21.561853 | orchestrator | Friday 23 May 2025 00:28:21 +0000 (0:00:01.371) 0:00:18.831 ************ 2025-05-23 00:28:21.625440 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:21.650326 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:21.673634 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:21.695317 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:21.745250 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:21.746374 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:21.749200 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:21.749230 | orchestrator | 2025-05-23 00:28:21.749244 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-23 00:28:21.749320 | orchestrator | Friday 23 May 2025 00:28:21 +0000 (0:00:00.191) 0:00:19.022 ************ 2025-05-23 00:28:21.843348 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:21.862776 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:21.894564 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:21.962882 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:21.963694 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:21.963999 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:21.966155 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:21.966804 | orchestrator | 2025-05-23 00:28:21.967693 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-23 00:28:21.968481 | 
orchestrator | Friday 23 May 2025 00:28:21 +0000 (0:00:00.217) 0:00:19.239 ************ 2025-05-23 00:28:22.033318 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:22.057528 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:22.076713 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:22.101458 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:22.161648 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:22.162451 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:22.166112 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:22.166419 | orchestrator | 2025-05-23 00:28:22.167422 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-23 00:28:22.168491 | orchestrator | Friday 23 May 2025 00:28:22 +0000 (0:00:00.199) 0:00:19.439 ************ 2025-05-23 00:28:22.414666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:22.414874 | orchestrator | 2025-05-23 00:28:22.415466 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-23 00:28:22.418451 | orchestrator | Friday 23 May 2025 00:28:22 +0000 (0:00:00.250) 0:00:19.689 ************ 2025-05-23 00:28:22.953564 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:22.954316 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:22.954640 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:22.955926 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:22.956591 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:22.957514 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:22.957689 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:22.959670 | orchestrator | 2025-05-23 00:28:22.959762 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-23 00:28:22.959778 | orchestrator | Friday 23 May 2025 00:28:22 +0000 (0:00:00.540) 0:00:20.229 ************ 2025-05-23 00:28:23.034484 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:23.054226 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:23.078199 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:23.097092 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:23.174125 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:23.174983 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:23.175482 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:23.176505 | orchestrator | 2025-05-23 00:28:23.177524 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-23 00:28:23.178343 | orchestrator | Friday 23 May 2025 00:28:23 +0000 (0:00:00.221) 0:00:20.451 ************ 2025-05-23 00:28:24.198917 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:24.199026 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:24.199042 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:24.199851 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:24.201035 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:24.201888 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:24.203081 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:24.205160 | orchestrator | 2025-05-23 00:28:24.205190 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-23 00:28:24.205240 | orchestrator | Friday 23 May 2025 00:28:24 +0000 (0:00:01.020) 0:00:21.471 ************ 2025-05-23 00:28:24.732581 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:24.732815 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:24.732838 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:24.733467 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:24.733951 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:24.733981 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:24.734343 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:24.734721 | orchestrator | 2025-05-23 00:28:24.735038 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-23 00:28:24.735397 | orchestrator | Friday 23 May 2025 00:28:24 +0000 (0:00:00.536) 0:00:22.007 ************ 2025-05-23 00:28:25.829837 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:25.830405 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:25.831331 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:25.831717 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:25.833078 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:25.833774 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:25.834179 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:25.834994 | orchestrator | 2025-05-23 00:28:25.836596 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-23 00:28:25.836628 | orchestrator | Friday 23 May 2025 00:28:25 +0000 (0:00:01.096) 0:00:23.104 ************ 2025-05-23 00:28:39.212864 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:39.212988 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:39.213436 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:39.214423 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:39.215756 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:39.216714 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:39.217758 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:39.218698 | orchestrator | 2025-05-23 00:28:39.220055 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-23 00:28:39.220544 | orchestrator | Friday 23 May 2025 00:28:39 +0000 (0:00:13.380) 0:00:36.485 ************ 2025-05-23 00:28:39.276388 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:39.309761 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:39.335378 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:39.365300 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:39.429105 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:39.433107 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:39.434223 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:39.435364 | orchestrator | 2025-05-23 00:28:39.436581 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-23 00:28:39.437600 | orchestrator | Friday 23 May 2025 00:28:39 +0000 (0:00:00.220) 0:00:36.705 ************ 2025-05-23 00:28:39.498711 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:39.521445 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:39.546082 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:39.568186 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:39.630852 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:39.631714 | orchestrator | ok: [testbed-node-1] 2025-05-23 
00:28:39.631752 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:39.632405 | orchestrator | 2025-05-23 00:28:39.632680 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-23 00:28:39.634067 | orchestrator | Friday 23 May 2025 00:28:39 +0000 (0:00:00.202) 0:00:36.908 ************ 2025-05-23 00:28:39.704600 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:39.724813 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:39.752433 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:39.772374 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:39.826955 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:39.827992 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:39.828987 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:39.829789 | orchestrator | 2025-05-23 00:28:39.830713 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-23 00:28:39.831606 | orchestrator | Friday 23 May 2025 00:28:39 +0000 (0:00:00.195) 0:00:37.103 ************ 2025-05-23 00:28:40.118773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:40.118870 | orchestrator | 2025-05-23 00:28:40.119664 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-23 00:28:40.126749 | orchestrator | Friday 23 May 2025 00:28:40 +0000 (0:00:00.290) 0:00:37.394 ************ 2025-05-23 00:28:41.736844 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:41.737441 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:41.740100 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:41.740148 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:41.740898 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:41.742528 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:41.742908 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:41.743633 | orchestrator | 2025-05-23 00:28:41.744476 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-23 00:28:41.745142 | orchestrator | Friday 23 May 2025 00:28:41 +0000 (0:00:01.616) 0:00:39.011 ************ 2025-05-23 00:28:42.816912 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:42.817018 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:42.817034 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:42.817047 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:42.817651 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:42.818210 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:42.821147 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:42.821200 | orchestrator | 2025-05-23 00:28:42.821217 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-23 00:28:42.821232 | orchestrator | Friday 23 May 2025 00:28:42 +0000 (0:00:01.079) 0:00:40.091 ************ 2025-05-23 00:28:43.636071 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:43.636698 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:43.636745 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:43.636980 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:43.638337 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:43.638805 | orchestrator | ok: 
[testbed-node-1] 2025-05-23 00:28:43.640000 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:43.640759 | orchestrator | 2025-05-23 00:28:43.641415 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-23 00:28:43.642324 | orchestrator | Friday 23 May 2025 00:28:43 +0000 (0:00:00.820) 0:00:40.911 ************ 2025-05-23 00:28:43.923393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:43.924518 | orchestrator | 2025-05-23 00:28:43.924938 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-23 00:28:43.925777 | orchestrator | Friday 23 May 2025 00:28:43 +0000 (0:00:00.286) 0:00:41.198 ************ 2025-05-23 00:28:44.969162 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:44.972135 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:44.972188 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:44.972202 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:44.972213 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:44.972692 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:44.973698 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:44.974611 | orchestrator | 2025-05-23 00:28:44.975016 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-23 00:28:44.975449 | orchestrator | Friday 23 May 2025 00:28:44 +0000 (0:00:01.045) 0:00:42.244 ************ 2025-05-23 00:28:45.071776 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:28:45.094313 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:28:45.117157 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:28:45.243528 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:28:45.244406 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:28:45.245140 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:28:45.246070 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:28:45.247477 | orchestrator | 2025-05-23 00:28:45.247503 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-23 00:28:45.247816 | orchestrator | Friday 23 May 2025 00:28:45 +0000 (0:00:00.276) 0:00:42.520 ************ 2025-05-23 00:28:56.179856 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:28:56.179980 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:28:56.180043 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:28:56.180057 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:28:56.180167 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:28:56.180470 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:28:56.182207 | orchestrator | changed: [testbed-manager] 2025-05-23 00:28:56.183702 | orchestrator | 2025-05-23 00:28:56.183773 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-23 00:28:56.183787 | orchestrator | Friday 23 May 2025 00:28:56 +0000 (0:00:10.932) 0:00:53.452 ************ 2025-05-23 00:28:57.648096 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:57.650362 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:57.650404 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:57.650416 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:57.651508 | 
orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:57.651828 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:57.653082 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:57.653835 | orchestrator | 2025-05-23 00:28:57.654846 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-23 00:28:57.654958 | orchestrator | Friday 23 May 2025 00:28:57 +0000 (0:00:01.470) 0:00:54.923 ************ 2025-05-23 00:28:58.500756 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:58.504364 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:58.504425 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:58.504438 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:58.504449 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:58.504465 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:58.504534 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:58.505178 | orchestrator | 2025-05-23 00:28:58.505774 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-23 00:28:58.506353 | orchestrator | Friday 23 May 2025 00:28:58 +0000 (0:00:00.852) 0:00:55.775 ************ 2025-05-23 00:28:58.573713 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:58.599417 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:58.622835 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:58.649521 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:58.718921 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:58.719148 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:58.720021 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:58.720215 | orchestrator | 2025-05-23 00:28:58.721232 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-23 00:28:58.721714 | orchestrator | Friday 23 May 2025 00:28:58 +0000 (0:00:00.217) 0:00:55.993 ************ 2025-05-23 00:28:58.790177 | orchestrator | ok: [testbed-manager] 2025-05-23 00:28:58.814152 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:28:58.838510 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:28:58.860755 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:28:58.937731 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:28:58.937882 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:28:58.938499 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:28:58.939186 | orchestrator | 2025-05-23 00:28:58.939703 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-23 00:28:58.943633 | orchestrator | Friday 23 May 2025 00:28:58 +0000 (0:00:00.220) 0:00:56.214 ************ 2025-05-23 00:28:59.236317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:28:59.236524 | orchestrator | 2025-05-23 00:28:59.236900 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-23 00:28:59.237585 | orchestrator | Friday 23 May 2025 00:28:59 +0000 (0:00:00.297) 0:00:56.512 ************ 2025-05-23 00:29:00.828398 | orchestrator | ok: [testbed-manager] 2025-05-23 00:29:00.828611 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:29:00.831055 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:29:00.831091 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:29:00.831104 | 
orchestrator | ok: [testbed-node-2] 2025-05-23 00:29:00.831482 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:29:00.833718 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:29:00.833754 | orchestrator | 2025-05-23 00:29:00.833767 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-23 00:29:00.833820 | orchestrator | Friday 23 May 2025 00:29:00 +0000 (0:00:01.590) 0:00:58.102 ************ 2025-05-23 00:29:01.379746 | orchestrator | changed: [testbed-manager] 2025-05-23 00:29:01.382947 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:29:01.385336 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:29:01.387234 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:29:01.388057 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:29:01.389033 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:29:01.390383 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:29:01.391171 | orchestrator | 2025-05-23 00:29:01.392329 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-23 00:29:01.393443 | orchestrator | Friday 23 May 2025 00:29:01 +0000 (0:00:00.551) 0:00:58.654 ************ 2025-05-23 00:29:01.457807 | orchestrator | ok: [testbed-manager] 2025-05-23 00:29:01.497086 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:29:01.516955 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:29:01.547378 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:29:01.598283 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:29:01.598357 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:29:01.598419 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:29:01.598704 | orchestrator | 2025-05-23 00:29:01.598924 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-23 00:29:01.599153 | orchestrator | Friday 23 May 2025 00:29:01 +0000 (0:00:00.218) 0:00:58.873 ************ 2025-05-23 00:29:02.750318 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:29:02.751724 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:29:02.752089 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:29:02.752935 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:29:02.753786 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:29:02.754546 | orchestrator | ok: [testbed-manager] 2025-05-23 00:29:02.755236 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:29:02.755995 | orchestrator | 2025-05-23 00:29:02.756418 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-23 00:29:02.756811 | orchestrator | Friday 23 May 2025 00:29:02 +0000 (0:00:01.147) 0:01:00.021 ************ 2025-05-23 00:29:04.435618 | orchestrator | changed: [testbed-manager] 2025-05-23 00:29:04.435793 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:29:04.436939 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:29:04.437687 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:29:04.438124 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:29:04.438592 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:29:04.439488 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:29:04.439833 | orchestrator | 2025-05-23 00:29:04.441304 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-23 00:29:04.441839 | orchestrator | Friday 23 May 2025 00:29:04 +0000 (0:00:01.689) 0:01:01.711 ************ 2025-05-23 00:29:06.722343 | orchestrator | ok: 
[testbed-node-4] 2025-05-23 00:29:06.722520 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:29:06.724061 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:29:06.724415 | orchestrator | ok: [testbed-manager] 2025-05-23 00:29:06.726675 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:29:06.727154 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:29:06.730235 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:29:06.731138 | orchestrator | 2025-05-23 00:29:06.732195 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-23 00:29:06.732223 | orchestrator | Friday 23 May 2025 00:29:06 +0000 (0:00:02.281) 0:01:03.992 ************ 2025-05-23 00:29:43.141265 | orchestrator | ok: [testbed-manager] 2025-05-23 00:29:43.141427 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:29:43.141445 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:29:43.141494 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:29:43.141587 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:29:43.141603 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:29:43.142690 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:29:43.143270 | orchestrator | 2025-05-23 00:29:43.143914 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-23 00:29:43.144610 | orchestrator | Friday 23 May 2025 00:29:43 +0000 (0:00:36.418) 0:01:40.410 ************ 2025-05-23 00:31:06.935457 | orchestrator | changed: [testbed-manager] 2025-05-23 00:31:06.935569 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:31:06.935585 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:31:06.935596 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:31:06.935608 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:31:06.936091 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:31:06.936232 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:31:06.936467 | orchestrator | 2025-05-23 00:31:06.937002 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-23 00:31:06.937516 | orchestrator | Friday 23 May 2025 00:31:06 +0000 (0:01:23.799) 0:03:04.209 ************ 2025-05-23 00:31:08.459761 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:08.459866 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:08.459950 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:08.460530 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:08.461660 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:08.463033 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:08.463516 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:08.464270 | orchestrator | 2025-05-23 00:31:08.464739 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-23 00:31:08.465262 | orchestrator | Friday 23 May 2025 00:31:08 +0000 (0:00:01.524) 0:03:05.734 ************ 2025-05-23 00:31:20.040102 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:20.040279 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:20.041485 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:20.044223 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:20.044247 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:20.044259 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:20.044270 | orchestrator | changed: [testbed-manager] 2025-05-23 00:31:20.045070 | orchestrator | 2025-05-23 00:31:20.046000 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-05-23 00:31:20.046577 | orchestrator | Friday 23 May 2025 00:31:20 +0000 (0:00:11.575) 0:03:17.309 ************ 2025-05-23 00:31:20.366235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-23 00:31:20.366471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-23 00:31:20.367320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-23 00:31:20.370444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-23 00:31:20.371271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-23 00:31:20.371915 | orchestrator | 2025-05-23 00:31:20.373001 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-23 00:31:20.373448 | orchestrator | Friday 23 May 2025 00:31:20 +0000 (0:00:00.332) 0:03:17.642 ************ 2025-05-23 00:31:20.421609 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-23 00:31:20.456426 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:20.457218 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-23 00:31:20.457404 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-23 00:31:20.478639 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:31:20.506570 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:31:20.509924 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-23 00:31:20.528086 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:31:21.076247 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:31:21.076783 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:31:21.078353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:31:21.079842 | orchestrator | 2025-05-23 00:31:21.079972 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-23 00:31:21.080765 | orchestrator | Friday 23 May 2025 00:31:21 +0000 (0:00:00.709) 0:03:18.352 ************ 2025-05-23 00:31:21.117199 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-23 00:31:21.162468 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-23 00:31:21.162557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-23 00:31:21.163351 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-23 00:31:21.163443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-23 00:31:21.164543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-23 00:31:21.164650 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-23 00:31:21.165100 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-23 00:31:21.165405 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-23 00:31:21.165828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-23 00:31:21.166204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-23 00:31:21.201339 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:21.201502 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-23 00:31:21.201948 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-23 00:31:21.202460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-23 00:31:21.202723 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-23 00:31:21.203847 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-23 00:31:21.203870 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-23 00:31:21.203883 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-23 00:31:21.261611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-23 00:31:21.261705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-23 00:31:21.262059 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-23 00:31:21.262808 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-23 00:31:21.263132 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-23 00:31:21.264265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-23 00:31:21.265860 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-23 00:31:21.265897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-23 00:31:21.266378 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-23 00:31:21.266922 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-23 00:31:21.267250 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-23 00:31:21.267670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-23 00:31:21.267901 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-23 00:31:21.268351 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-23 00:31:21.268771 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-23 00:31:21.269058 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-23 00:31:21.269534 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-23 00:31:21.269928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-23 00:31:21.270208 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-23 00:31:21.272111 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-23 00:31:21.272136 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-23 00:31:21.292525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-23 00:31:21.292894 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:31:21.319218 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:31:25.640408 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:31:25.640763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-23 00:31:25.644722 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-23 00:31:25.645733 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-23 00:31:25.646977 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-23 00:31:25.647922 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-23 00:31:25.648884 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-23 00:31:25.650015 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-23 00:31:25.650823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-23 00:31:25.651703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-23 00:31:25.652522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-23 00:31:25.653349 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-23 00:31:25.653955 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-23 00:31:25.654782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-23 00:31:25.655617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-23 00:31:25.656332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-23 00:31:25.656958 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-23 00:31:25.657606 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-23 00:31:25.658077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-23 00:31:25.658858 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-23 00:31:25.659264 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-23 00:31:25.659708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-23 00:31:25.660321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-23 00:31:25.661015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-23 00:31:25.661408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-23 00:31:25.661965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-23 00:31:25.662490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-23 00:31:25.662975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-23 00:31:25.663416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-23 00:31:25.663947 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-23 00:31:25.664375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-23 00:31:25.664838 | orchestrator | 2025-05-23 00:31:25.665327 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-23 00:31:25.665863 | orchestrator | Friday 23 May 2025 00:31:25 +0000 (0:00:04.562) 0:03:22.914 ************ 2025-05-23 00:31:27.156514 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.158913 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
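
The osism.commons.sysctl tasks in this play apply kernel parameters per host group: vm.swappiness=1 on every host, the Elasticsearch/RabbitMQ network tuning (vm.max_map_count, TCP keepalive settings, somaxconn, syn backlog) only on testbed-node-0 through testbed-node-2, and the compute/k3s limits (net.netfilter.nf_conntrack_max, fs.inotify.max_user_instances) only on testbed-node-3 through testbed-node-5, as the "changed"/"skipping" results around this point show. As a rough illustration of the same effect — a minimal sketch assuming ansible.posix.sysctl and hard-coded host names, not the osism.commons.sysctl role's actual implementation — the logged values translate to something like:

```yaml
# Illustrative sketch only -- not the osism.commons.sysctl role's code.
# It reproduces parameter/value pairs visible in this section of the log
# (RabbitMQ tuning shown as a subset).
- name: Apply testbed kernel parameters (sketch)
  hosts: all
  become: true
  tasks:
    - name: Generic parameters (logged as changed on every host)
      ansible.posix.sysctl:
        name: vm.swappiness
        value: "1"
        state: present
        sysctl_set: true
        reload: true

    - name: Control-plane parameters (logged only on testbed-node-0..2)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop:
        - { name: vm.max_map_count, value: "262144" }
        - { name: net.ipv4.tcp_keepalive_time, value: "6" }
        - { name: net.core.somaxconn, value: "4096" }
        - { name: net.ipv4.tcp_max_syn_backlog, value: "8192" }
      when: inventory_hostname in ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

    - name: Compute/k3s parameters (logged only on testbed-node-3..5, just below)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop:
        - { name: net.netfilter.nf_conntrack_max, value: "1048576" }
        - { name: fs.inotify.max_user_instances, value: "1024" }
      when: inventory_hostname in ["testbed-node-3", "testbed-node-4", "testbed-node-5"]
```
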
2025-05-23 00:31:27.159726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.160992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.162252 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.163058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.164217 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-23 00:31:27.164922 | orchestrator | 2025-05-23 00:31:27.165236 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-23 00:31:27.165959 | orchestrator | Friday 23 May 2025 00:31:27 +0000 (0:00:01.516) 0:03:24.431 ************ 2025-05-23 00:31:27.204718 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-23 00:31:27.228908 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:27.300128 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-23 00:31:27.627342 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:31:27.628082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-23 00:31:27.628991 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:31:27.631206 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-23 00:31:27.631236 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:31:27.631294 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-23 00:31:27.632107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-23 00:31:27.632289 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-23 00:31:27.632737 | orchestrator | 2025-05-23 00:31:27.633193 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-23 00:31:27.633521 | orchestrator | Friday 23 May 2025 00:31:27 +0000 (0:00:00.472) 0:03:24.903 ************ 2025-05-23 00:31:27.685052 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-23 00:31:27.707777 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:27.778926 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-23 00:31:28.153546 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:31:28.153868 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-23 00:31:28.155732 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:31:28.155835 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-23 00:31:28.156703 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:31:28.158389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-23 00:31:28.158559 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-23 00:31:28.159558 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-23 00:31:28.159878 | orchestrator | 2025-05-23 00:31:28.160893 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-23 00:31:28.161034 | orchestrator | Friday 23 May 2025 00:31:28 +0000 (0:00:00.525) 0:03:25.429 ************ 2025-05-23 00:31:28.224716 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:28.246131 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:31:28.267358 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:31:28.290947 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:31:28.406063 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:31:28.406327 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:31:28.407278 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:31:28.408427 | orchestrator | 2025-05-23 00:31:28.409351 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-23 00:31:28.410080 | orchestrator | Friday 23 May 2025 00:31:28 +0000 (0:00:00.252) 0:03:25.681 ************ 2025-05-23 00:31:34.338540 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:34.338679 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:34.339097 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:34.339653 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:34.340693 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:34.342224 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:34.343059 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:34.343691 | orchestrator | 2025-05-23 00:31:34.344025 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-23 00:31:34.344867 | orchestrator | Friday 23 May 2025 00:31:34 +0000 (0:00:05.933) 0:03:31.615 ************ 2025-05-23 00:31:34.410606 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-23 00:31:34.412118 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-23 00:31:34.442776 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:34.488676 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:31:34.488809 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-23 00:31:34.489358 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-23 00:31:34.523843 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:31:34.523989 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-23 00:31:34.556618 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:31:34.557328 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-23 00:31:34.623430 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:31:34.623733 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:31:34.626969 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-23 00:31:34.626995 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:31:34.627007 | orchestrator | 2025-05-23 00:31:34.627020 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-23 00:31:34.627439 | orchestrator | Friday 23 May 2025 00:31:34 +0000 (0:00:00.284) 0:03:31.899 ************ 2025-05-23 00:31:35.581337 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-23 00:31:35.584574 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-23 00:31:35.585725 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-05-23 00:31:35.586973 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-23 00:31:35.587457 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-23 00:31:35.588327 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-23 00:31:35.589168 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-23 00:31:35.589672 | orchestrator | 2025-05-23 00:31:35.590373 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-23 00:31:35.590998 | orchestrator | Friday 23 May 2025 00:31:35 +0000 (0:00:00.956) 0:03:32.856 ************ 2025-05-23 00:31:35.971316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:31:35.971853 | orchestrator | 2025-05-23 00:31:35.972175 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-23 00:31:35.972843 | orchestrator | Friday 23 May 2025 00:31:35 +0000 (0:00:00.390) 0:03:33.246 ************ 2025-05-23 00:31:37.390473 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:37.392186 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:37.392430 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:37.393667 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:37.394422 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:37.395329 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:37.395875 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:37.396487 | orchestrator | 2025-05-23 00:31:37.397075 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-23 00:31:37.397954 | orchestrator | Friday 23 May 2025 00:31:37 +0000 (0:00:01.419) 0:03:34.665 ************ 2025-05-23 00:31:38.767230 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:38.767350 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:38.768061 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:38.768207 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:38.768837 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:38.769794 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:38.770167 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:38.770929 | orchestrator | 2025-05-23 00:31:38.771290 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-23 00:31:38.772334 | orchestrator | Friday 23 May 2025 00:31:38 +0000 (0:00:01.376) 0:03:36.041 ************ 2025-05-23 00:31:39.446692 | orchestrator | changed: [testbed-manager] 2025-05-23 00:31:39.447474 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:31:39.448268 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:31:39.449653 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:31:39.450592 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:31:39.451509 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:31:39.452318 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:31:39.452807 | orchestrator | 2025-05-23 00:31:39.453364 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-23 00:31:39.454122 | orchestrator | Friday 23 May 2025 00:31:39 +0000 (0:00:00.680) 0:03:36.722 ************ 2025-05-23 00:31:40.019280 | orchestrator | ok: [testbed-manager] 2025-05-23 
00:31:40.023075 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:40.023157 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:40.023171 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:40.023183 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:40.023194 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:40.023205 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:40.023217 | orchestrator | 2025-05-23 00:31:40.023424 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-23 00:31:40.025903 | orchestrator | Friday 23 May 2025 00:31:40 +0000 (0:00:00.571) 0:03:37.294 ************ 2025-05-23 00:31:40.930689 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958634.1538613, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.931445 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958680.960098, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.932092 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958667.748355, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.933218 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958666.919082, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.933725 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958663.4255347, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.935031 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958669.6157331, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.936256 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747958669.3993132, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.936677 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958664.969011, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.937918 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958587.676449, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.938558 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958601.2618454, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.939065 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958584.6727748, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.939844 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958583.963585, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.940225 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958588.1749382, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.940690 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747958586.3668654, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 00:31:40.941123 | orchestrator | 2025-05-23 00:31:40.941562 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-23 00:31:40.941974 | orchestrator | Friday 23 May 2025 00:31:40 +0000 (0:00:00.911) 0:03:38.206 ************ 2025-05-23 00:31:42.007679 | orchestrator | changed: [testbed-manager] 2025-05-23 00:31:42.008471 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:31:42.010218 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:31:42.011415 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:31:42.011737 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:31:42.012244 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:31:42.013006 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:31:42.013671 | orchestrator | 2025-05-23 00:31:42.014539 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-23 00:31:42.014890 | orchestrator | Friday 23 May 2025 00:31:42 +0000 (0:00:01.076) 0:03:39.283 ************ 2025-05-23 00:31:43.115300 | orchestrator | changed: [testbed-manager] 2025-05-23 00:31:43.115471 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:31:43.116269 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:31:43.117770 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:31:43.118879 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:31:43.119875 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:31:43.120707 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:31:43.121717 | orchestrator | 2025-05-23 00:31:43.122322 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] 
******************** 2025-05-23 00:31:43.122992 | orchestrator | Friday 23 May 2025 00:31:43 +0000 (0:00:01.107) 0:03:40.390 ************ 2025-05-23 00:31:43.224261 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:31:43.258992 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:31:43.290830 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:31:43.323410 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:31:43.391877 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:31:43.392044 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:31:43.392567 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:31:43.393551 | orchestrator | 2025-05-23 00:31:43.395113 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-23 00:31:43.395195 | orchestrator | Friday 23 May 2025 00:31:43 +0000 (0:00:00.277) 0:03:40.668 ************ 2025-05-23 00:31:44.089819 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:44.090090 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:44.091819 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:44.092259 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:44.093185 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:44.094314 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:44.094774 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:44.095573 | orchestrator | 2025-05-23 00:31:44.096583 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-23 00:31:44.096955 | orchestrator | Friday 23 May 2025 00:31:44 +0000 (0:00:00.694) 0:03:41.363 ************ 2025-05-23 00:31:44.457845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:31:44.459893 | orchestrator | 2025-05-23 00:31:44.462366 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-23 00:31:44.462924 | orchestrator | Friday 23 May 2025 00:31:44 +0000 (0:00:00.369) 0:03:41.732 ************ 2025-05-23 00:31:51.873450 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:51.873670 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:31:51.874159 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:31:51.875705 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:31:51.876285 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:31:51.876658 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:31:51.877593 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:31:51.877948 | orchestrator | 2025-05-23 00:31:51.878882 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-23 00:31:51.879286 | orchestrator | Friday 23 May 2025 00:31:51 +0000 (0:00:07.413) 0:03:49.146 ************ 2025-05-23 00:31:52.962686 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:52.962787 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:52.962803 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:52.963207 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:52.963879 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:52.964204 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:52.965187 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:52.966077 | orchestrator | 2025-05-23 00:31:52.966724 | orchestrator | TASK 
[osism.services.rng : Manage rng service] ********************************* 2025-05-23 00:31:52.967260 | orchestrator | Friday 23 May 2025 00:31:52 +0000 (0:00:01.090) 0:03:50.236 ************ 2025-05-23 00:31:53.970571 | orchestrator | ok: [testbed-manager] 2025-05-23 00:31:53.970942 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:31:53.973214 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:31:53.975407 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:31:53.975448 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:31:53.975899 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:31:53.976591 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:31:53.977298 | orchestrator | 2025-05-23 00:31:53.977679 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-23 00:31:53.978306 | orchestrator | Friday 23 May 2025 00:31:53 +0000 (0:00:01.008) 0:03:51.244 ************ 2025-05-23 00:31:54.388354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:31:54.388684 | orchestrator | 2025-05-23 00:31:54.389077 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-23 00:31:54.389731 | orchestrator | Friday 23 May 2025 00:31:54 +0000 (0:00:00.419) 0:03:51.664 ************ 2025-05-23 00:32:02.871702 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:02.871886 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:02.873404 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:32:02.873824 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:02.874661 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:02.875267 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:32:02.875776 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:02.875967 | orchestrator | 2025-05-23 00:32:02.876439 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-23 00:32:02.876925 | orchestrator | Friday 23 May 2025 00:32:02 +0000 (0:00:08.476) 0:04:00.141 ************ 2025-05-23 00:32:03.293267 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:03.329034 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:03.742314 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:03.744107 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:32:03.744170 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:03.744778 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:03.750756 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:32:03.750854 | orchestrator | 2025-05-23 00:32:03.750943 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-23 00:32:03.751558 | orchestrator | Friday 23 May 2025 00:32:03 +0000 (0:00:00.876) 0:04:01.017 ************ 2025-05-23 00:32:04.901244 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:04.901361 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:04.902753 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:04.904267 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:32:04.905243 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:04.906101 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:04.907252 | orchestrator | changed: [testbed-node-2] 
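The smartd steps just logged (install smartmontools, create /var/log/smartd, copy a configuration file, then the "Manage smartd service" task that follows) map onto a short task sequence. A hedged sketch using stock Ansible modules, not the actual osism.services.smartd role; the DEVICESCAN directive is only an illustrative default from the smartd.conf manual:

- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    mode: "0755"

- name: Copy smartmontools configuration file
  ansible.builtin.copy:
    dest: /etc/smartd.conf
    # Illustrative content only; the real role ships its own configuration.
    content: |
      DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03)
    mode: "0644"

- name: Manage smartd service
  ansible.builtin.service:
    name: smartd
    state: started
    enabled: true
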
2025-05-23 00:32:04.907787 | orchestrator | 2025-05-23 00:32:04.908307 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-23 00:32:04.908985 | orchestrator | Friday 23 May 2025 00:32:04 +0000 (0:00:01.156) 0:04:02.174 ************ 2025-05-23 00:32:05.944360 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:05.944464 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:05.944970 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:05.945704 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:32:05.946497 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:05.947204 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:05.947925 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:32:05.948857 | orchestrator | 2025-05-23 00:32:05.950061 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-23 00:32:05.950223 | orchestrator | Friday 23 May 2025 00:32:05 +0000 (0:00:01.043) 0:04:03.217 ************ 2025-05-23 00:32:06.051182 | orchestrator | ok: [testbed-manager] 2025-05-23 00:32:06.084858 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:32:06.124455 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:32:06.178271 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:32:06.259451 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:32:06.260276 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:32:06.260770 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:32:06.261302 | orchestrator | 2025-05-23 00:32:06.261903 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-23 00:32:06.262507 | orchestrator | Friday 23 May 2025 00:32:06 +0000 (0:00:00.319) 0:04:03.537 ************ 2025-05-23 00:32:06.373605 | orchestrator | ok: [testbed-manager] 2025-05-23 00:32:06.427234 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:32:06.472393 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:32:06.517549 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:32:06.625949 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:32:06.626340 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:32:06.627059 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:32:06.628603 | orchestrator | 2025-05-23 00:32:06.629314 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-23 00:32:06.629946 | orchestrator | Friday 23 May 2025 00:32:06 +0000 (0:00:00.365) 0:04:03.903 ************ 2025-05-23 00:32:06.727411 | orchestrator | ok: [testbed-manager] 2025-05-23 00:32:06.763094 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:32:06.798234 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:32:06.849728 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:32:06.922007 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:32:06.923195 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:32:06.924338 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:32:06.926947 | orchestrator | 2025-05-23 00:32:06.927982 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-23 00:32:06.928583 | orchestrator | Friday 23 May 2025 00:32:06 +0000 (0:00:00.294) 0:04:04.197 ************ 2025-05-23 00:32:12.811527 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:32:12.811639 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:32:12.812574 | orchestrator | ok: [testbed-manager] 2025-05-23 00:32:12.812646 | orchestrator 
| ok: [testbed-node-0] 2025-05-23 00:32:12.813094 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:32:12.813404 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:32:12.813696 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:32:12.813970 | orchestrator | 2025-05-23 00:32:12.814276 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-23 00:32:12.814966 | orchestrator | Friday 23 May 2025 00:32:12 +0000 (0:00:05.889) 0:04:10.087 ************ 2025-05-23 00:32:13.263696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:32:13.264309 | orchestrator | 2025-05-23 00:32:13.264633 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-23 00:32:13.265980 | orchestrator | Friday 23 May 2025 00:32:13 +0000 (0:00:00.451) 0:04:10.538 ************ 2025-05-23 00:32:13.350063 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.350177 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-23 00:32:13.351260 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.413932 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:32:13.414093 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-23 00:32:13.414254 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.460516 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-23 00:32:13.460587 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:32:13.460912 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.460936 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-23 00:32:13.502542 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:32:13.502837 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.553507 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-23 00:32:13.553637 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:32:13.553663 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.553762 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-23 00:32:13.643760 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:32:13.645758 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:32:13.647085 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-23 00:32:13.647382 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-23 00:32:13.648292 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:32:13.648912 | orchestrator | 2025-05-23 00:32:13.649680 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-23 00:32:13.649896 | orchestrator | Friday 23 May 2025 00:32:13 +0000 (0:00:00.380) 0:04:10.919 ************ 2025-05-23 00:32:14.057249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:32:14.059285 | orchestrator | 2025-05-23 00:32:14.059321 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-23 00:32:14.059335 | orchestrator | Friday 23 May 2025 00:32:14 +0000 (0:00:00.410) 0:04:11.330 ************ 2025-05-23 00:32:14.135327 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-23 00:32:14.135417 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-23 00:32:14.168461 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:32:14.168795 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-23 00:32:14.205350 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:32:14.205907 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-23 00:32:14.254939 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:32:14.255387 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-23 00:32:14.290284 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:32:14.353935 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-23 00:32:14.354896 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:32:14.354992 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:32:14.355893 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-23 00:32:14.356354 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:32:14.356964 | orchestrator | 2025-05-23 00:32:14.357520 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-23 00:32:14.358331 | orchestrator | Friday 23 May 2025 00:32:14 +0000 (0:00:00.298) 0:04:11.629 ************ 2025-05-23 00:32:14.732629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:32:14.733548 | orchestrator | 2025-05-23 00:32:14.734259 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-23 00:32:14.734756 | orchestrator | Friday 23 May 2025 00:32:14 +0000 (0:00:00.379) 0:04:12.008 ************ 2025-05-23 00:32:48.229268 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:48.229650 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:48.230877 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:48.232851 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:48.233545 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:32:48.234178 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:32:48.235429 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:48.235939 | orchestrator | 2025-05-23 00:32:48.236282 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-23 00:32:48.236790 | orchestrator | Friday 23 May 2025 00:32:48 +0000 (0:00:33.493) 0:04:45.502 ************ 2025-05-23 00:32:56.128396 | orchestrator | changed: [testbed-manager] 2025-05-23 00:32:56.128584 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:32:56.129760 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:32:56.130125 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:32:56.130703 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:32:56.132370 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:32:56.133025 | orchestrator | changed: [testbed-node-5] 
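The cleanup phase above removes a list of unwanted packages and then, in the tasks that follow, purges cloud-init and unattended-upgrades, cleans the apt cache, autoremoves orphaned dependencies and deletes the cloud-init configuration directory. A minimal sketch with ansible.builtin.apt; the package list is a placeholder, not the role's actual cleanup_packages default:

- name: Cleanup installed packages
  ansible.builtin.apt:
    # Placeholder list; the osism.commons.cleanup role defines its own set.
    name: "{{ cleanup_packages | default(['snapd', 'lxd-agent-loader']) }}"
    state: absent
    purge: true

- name: Remove cloud-init and unattended-upgrades packages
  ansible.builtin.apt:
    name:
      - cloud-init
      - unattended-upgrades
    state: absent
    purge: true

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true

- name: Remove cloud-init configuration directory
  ansible.builtin.file:
    path: /etc/cloud   # standard cloud-init config location, assumed here
    state: absent
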
2025-05-23 00:32:56.133412 | orchestrator | 2025-05-23 00:32:56.134436 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-23 00:32:56.134824 | orchestrator | Friday 23 May 2025 00:32:56 +0000 (0:00:07.902) 0:04:53.405 ************ 2025-05-23 00:33:03.816720 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:03.817741 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:03.818934 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:03.823319 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:03.823363 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:03.823385 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:03.823403 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:03.823416 | orchestrator | 2025-05-23 00:33:03.823427 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-23 00:33:03.823440 | orchestrator | Friday 23 May 2025 00:33:03 +0000 (0:00:07.686) 0:05:01.091 ************ 2025-05-23 00:33:05.436622 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:05.437567 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:05.437949 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:05.439407 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:05.440732 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:05.441249 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:05.441955 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:05.443164 | orchestrator | 2025-05-23 00:33:05.443192 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-23 00:33:05.443931 | orchestrator | Friday 23 May 2025 00:33:05 +0000 (0:00:01.621) 0:05:02.712 ************ 2025-05-23 00:33:11.137719 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:11.137836 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:11.138758 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:11.139001 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:11.139511 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:11.140297 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:11.142222 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:11.142433 | orchestrator | 2025-05-23 00:33:11.143044 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-23 00:33:11.143798 | orchestrator | Friday 23 May 2025 00:33:11 +0000 (0:00:05.698) 0:05:08.411 ************ 2025-05-23 00:33:11.550295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:33:11.550691 | orchestrator | 2025-05-23 00:33:11.551419 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-23 00:33:11.552243 | orchestrator | Friday 23 May 2025 00:33:11 +0000 (0:00:00.415) 0:05:08.826 ************ 2025-05-23 00:33:12.284607 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:12.285082 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:12.287773 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:12.290843 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:12.290875 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:12.293111 | orchestrator | changed: 
[testbed-node-1] 2025-05-23 00:33:12.295638 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:12.296536 | orchestrator | 2025-05-23 00:33:12.297642 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-23 00:33:12.298235 | orchestrator | Friday 23 May 2025 00:33:12 +0000 (0:00:00.731) 0:05:09.557 ************ 2025-05-23 00:33:13.906296 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:13.906515 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:13.907966 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:13.909286 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:13.909353 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:13.910740 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:13.912249 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:13.912338 | orchestrator | 2025-05-23 00:33:13.912991 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-23 00:33:13.913781 | orchestrator | Friday 23 May 2025 00:33:13 +0000 (0:00:01.622) 0:05:11.180 ************ 2025-05-23 00:33:14.732641 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:14.732739 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:14.732969 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:14.733243 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:14.733721 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:14.734775 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:14.734798 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:14.735539 | orchestrator | 2025-05-23 00:33:14.735831 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-23 00:33:14.737474 | orchestrator | Friday 23 May 2025 00:33:14 +0000 (0:00:00.827) 0:05:12.008 ************ 2025-05-23 00:33:14.825837 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:14.861128 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:14.898673 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:14.931394 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:14.965587 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:15.026323 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:15.026914 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:15.027739 | orchestrator | 2025-05-23 00:33:15.028488 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-23 00:33:15.029363 | orchestrator | Friday 23 May 2025 00:33:15 +0000 (0:00:00.294) 0:05:12.302 ************ 2025-05-23 00:33:15.096249 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:15.134295 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:15.164518 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:15.194297 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:15.230217 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:15.453232 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:15.454888 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:15.455115 | orchestrator | 2025-05-23 00:33:15.456378 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-23 00:33:15.457180 | orchestrator | Friday 23 May 2025 00:33:15 +0000 (0:00:00.425) 0:05:12.728 ************ 2025-05-23 00:33:15.552811 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:15.585710 | 
orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:15.642338 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:15.674110 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:15.758294 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:15.760164 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:15.760213 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:15.761375 | orchestrator | 2025-05-23 00:33:15.763007 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-23 00:33:15.763048 | orchestrator | Friday 23 May 2025 00:33:15 +0000 (0:00:00.304) 0:05:13.033 ************ 2025-05-23 00:33:15.824330 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:15.855354 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:15.890413 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:15.921557 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:15.950554 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:16.003201 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:16.004122 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:16.004711 | orchestrator | 2025-05-23 00:33:16.005400 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-23 00:33:16.005919 | orchestrator | Friday 23 May 2025 00:33:16 +0000 (0:00:00.247) 0:05:13.280 ************ 2025-05-23 00:33:16.113352 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:16.152949 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:16.187758 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:16.225936 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:16.290721 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:16.290941 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:16.291389 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:16.291909 | orchestrator | 2025-05-23 00:33:16.292630 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-23 00:33:16.293058 | orchestrator | Friday 23 May 2025 00:33:16 +0000 (0:00:00.286) 0:05:13.567 ************ 2025-05-23 00:33:16.354601 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:16.384344 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:16.415629 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:16.451273 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:16.532276 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:16.532399 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:16.532505 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:16.533154 | orchestrator | 2025-05-23 00:33:16.534776 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-23 00:33:16.534815 | orchestrator | Friday 23 May 2025 00:33:16 +0000 (0:00:00.241) 0:05:13.809 ************ 2025-05-23 00:33:16.621186 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:16.655493 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:16.686710 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:16.717577 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:16.746975 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:16.796986 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:16.797166 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:16.799080 | orchestrator | 2025-05-23 00:33:16.799158 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-23 00:33:16.800053 | orchestrator | Friday 23 May 2025 00:33:16 +0000 (0:00:00.264) 0:05:14.074 ************ 2025-05-23 00:33:17.269859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:33:17.269962 | orchestrator | 2025-05-23 00:33:17.270310 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-23 00:33:17.271262 | orchestrator | Friday 23 May 2025 00:33:17 +0000 (0:00:00.470) 0:05:14.544 ************ 2025-05-23 00:33:18.098669 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:18.098791 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:18.098810 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:18.098822 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:18.098946 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:18.098963 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:18.099355 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:18.100591 | orchestrator | 2025-05-23 00:33:18.101754 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-23 00:33:18.103358 | orchestrator | Friday 23 May 2025 00:33:18 +0000 (0:00:00.825) 0:05:15.370 ************ 2025-05-23 00:33:20.881607 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:33:20.881725 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:33:20.882156 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:33:20.882463 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:33:20.883835 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:33:20.883993 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:20.884113 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:33:20.886845 | orchestrator | 2025-05-23 00:33:20.886882 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-23 00:33:20.886896 | orchestrator | Friday 23 May 2025 00:33:20 +0000 (0:00:02.787) 0:05:18.157 ************ 2025-05-23 00:33:20.964438 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-23 00:33:20.964612 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-23 00:33:20.965231 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-23 00:33:21.030153 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:33:21.105602 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-23 00:33:21.106485 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-23 00:33:21.107509 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-23 00:33:21.110117 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-23 00:33:21.111038 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-23 00:33:21.206263 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-23 00:33:21.209812 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:21.209917 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-23 00:33:21.211887 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-23 00:33:21.212368 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-23 
00:33:21.276525 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:21.277218 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-23 00:33:21.277448 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-23 00:33:21.278688 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-23 00:33:21.344616 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:21.346143 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-23 00:33:21.346181 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-23 00:33:21.497894 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-23 00:33:21.497988 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:21.498430 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:21.499256 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-23 00:33:21.500611 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-23 00:33:21.500949 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-23 00:33:21.502255 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:21.502385 | orchestrator | 2025-05-23 00:33:21.502969 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-23 00:33:21.503322 | orchestrator | Friday 23 May 2025 00:33:21 +0000 (0:00:00.617) 0:05:18.774 ************ 2025-05-23 00:33:28.208211 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:28.208335 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:28.208354 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:28.208366 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:28.208377 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:28.208388 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:28.209149 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:28.209190 | orchestrator | 2025-05-23 00:33:28.209788 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-23 00:33:28.212725 | orchestrator | Friday 23 May 2025 00:33:28 +0000 (0:00:06.698) 0:05:25.473 ************ 2025-05-23 00:33:29.276339 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:29.280136 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:29.280175 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:29.280238 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:29.280538 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:29.282350 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:29.282375 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:29.282600 | orchestrator | 2025-05-23 00:33:29.283362 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-23 00:33:29.283910 | orchestrator | Friday 23 May 2025 00:33:29 +0000 (0:00:01.076) 0:05:26.550 ************ 2025-05-23 00:33:36.880250 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:36.880484 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:36.881227 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:36.882130 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:36.883939 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:36.884980 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:36.886113 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:36.886898 | orchestrator | 
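The repository setup logged above (apt-transport-https, gpg key, repository) and the package cache update that follows can be expressed roughly as below. The key URL and repository line are the usual upstream Docker CE values and are assumptions; the log does not show which mirror or channel OSISM actually configures:

- name: Install apt-transport-https package
  ansible.builtin.apt:
    name: apt-transport-https
    state: present

- name: Add repository gpg key          # assumed upstream URL
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository                  # assumed upstream repo line
  ansible.builtin.apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true
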
2025-05-23 00:33:36.887970 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-23 00:33:36.888812 | orchestrator | Friday 23 May 2025 00:33:36 +0000 (0:00:07.605) 0:05:34.156 ************ 2025-05-23 00:33:40.002720 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:40.003013 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:40.003402 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:40.003903 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:40.004595 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:40.005655 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:40.005772 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:40.008632 | orchestrator | 2025-05-23 00:33:40.008689 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-23 00:33:40.008803 | orchestrator | Friday 23 May 2025 00:33:39 +0000 (0:00:03.121) 0:05:37.277 ************ 2025-05-23 00:33:41.288366 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:41.288475 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:41.288992 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:41.294879 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:41.294935 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:41.294947 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:41.296000 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:41.296905 | orchestrator | 2025-05-23 00:33:41.298349 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-23 00:33:41.299035 | orchestrator | Friday 23 May 2025 00:33:41 +0000 (0:00:01.285) 0:05:38.562 ************ 2025-05-23 00:33:42.753235 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:42.753713 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:42.754176 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:42.754636 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:42.756380 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:42.757798 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:42.758619 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:42.758948 | orchestrator | 2025-05-23 00:33:42.760044 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-23 00:33:42.760700 | orchestrator | Friday 23 May 2025 00:33:42 +0000 (0:00:01.466) 0:05:40.029 ************ 2025-05-23 00:33:42.960209 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:33:43.024022 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:33:43.086215 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:33:43.154207 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:33:43.313761 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:33:43.314454 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:33:43.315177 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:43.315872 | orchestrator | 2025-05-23 00:33:43.316706 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-23 00:33:43.319858 | orchestrator | Friday 23 May 2025 00:33:43 +0000 (0:00:00.560) 0:05:40.589 ************ 2025-05-23 00:33:52.641545 | orchestrator | ok: [testbed-manager] 2025-05-23 00:33:52.642292 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:52.643116 | orchestrator | changed: [testbed-node-4] 
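Pinning the docker and docker-cli package versions and unlocking/locking containerd, as in the tasks above and the install steps that continue below, is typically done with an apt preferences entry plus dpkg holds. A sketch under those assumptions; the version string is illustrative, since the log does not reveal the docker_version actually used:

- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    content: |
      Package: docker-ce
      Pin: version 5:27.*          # illustrative version pattern only
      Pin-Priority: 1000
    mode: "0644"

- name: Lock containerd package      # upstream package name containerd.io assumed
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold

- name: Unlock containerd package (before an upgrade run)
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: install
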
2025-05-23 00:33:52.643880 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:52.644379 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:52.645154 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:52.645706 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:52.647397 | orchestrator | 2025-05-23 00:33:52.647922 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-23 00:33:52.648675 | orchestrator | Friday 23 May 2025 00:33:52 +0000 (0:00:09.322) 0:05:49.912 ************ 2025-05-23 00:33:53.515589 | orchestrator | changed: [testbed-manager] 2025-05-23 00:33:53.516292 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:33:53.517243 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:33:53.518228 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:33:53.519563 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:33:53.519588 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:33:53.522253 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:33:53.522279 | orchestrator | 2025-05-23 00:33:53.522292 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-23 00:33:53.523034 | orchestrator | Friday 23 May 2025 00:33:53 +0000 (0:00:00.880) 0:05:50.792 ************ 2025-05-23 00:34:05.846895 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:05.848138 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:05.848174 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:05.848186 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:05.848197 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:05.849165 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:05.849459 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:05.850431 | orchestrator | 2025-05-23 00:34:05.851493 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-23 00:34:05.852311 | orchestrator | Friday 23 May 2025 00:34:05 +0000 (0:00:12.325) 0:06:03.118 ************ 2025-05-23 00:34:18.095271 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:18.095409 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:18.095433 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:18.095452 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:18.096249 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:18.096991 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:18.099030 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:18.099130 | orchestrator | 2025-05-23 00:34:18.099158 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-23 00:34:18.099211 | orchestrator | Friday 23 May 2025 00:34:18 +0000 (0:00:12.246) 0:06:15.365 ************ 2025-05-23 00:34:18.514307 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-23 00:34:18.587424 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-23 00:34:19.439378 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-23 00:34:19.439482 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-23 00:34:19.440215 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-23 00:34:19.441130 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-23 00:34:19.441870 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-23 00:34:19.442945 | 
orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-23 00:34:19.444870 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-23 00:34:19.444956 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-23 00:34:19.445039 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-23 00:34:19.445554 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-23 00:34:19.445985 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-23 00:34:19.446583 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-23 00:34:19.447034 | orchestrator | 2025-05-23 00:34:19.447559 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-23 00:34:19.448038 | orchestrator | Friday 23 May 2025 00:34:19 +0000 (0:00:01.349) 0:06:16.714 ************ 2025-05-23 00:34:19.559240 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:19.625331 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:19.685277 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:19.746142 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:19.820102 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:19.949508 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:19.949591 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:19.949605 | orchestrator | 2025-05-23 00:34:19.949619 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-23 00:34:19.949633 | orchestrator | Friday 23 May 2025 00:34:19 +0000 (0:00:00.507) 0:06:17.222 ************ 2025-05-23 00:34:23.607617 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:23.607726 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:23.607956 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:23.608676 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:23.609693 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:23.611197 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:23.611625 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:23.612634 | orchestrator | 2025-05-23 00:34:23.613972 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-23 00:34:23.614784 | orchestrator | Friday 23 May 2025 00:34:23 +0000 (0:00:03.658) 0:06:20.881 ************ 2025-05-23 00:34:23.770421 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:23.837595 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:23.902740 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:24.150784 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:24.210743 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:24.319236 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:24.320248 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:24.320646 | orchestrator | 2025-05-23 00:34:24.324209 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-23 00:34:24.324254 | orchestrator | Friday 23 May 2025 00:34:24 +0000 (0:00:00.712) 0:06:21.593 ************ 2025-05-23 00:34:24.396713 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-23 00:34:24.396866 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-23 00:34:24.467541 | orchestrator | skipping: [testbed-manager] 2025-05-23 
00:34:24.468413 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-23 00:34:24.470415 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-23 00:34:24.539528 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:24.539883 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-23 00:34:24.540842 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-23 00:34:24.614937 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:24.615358 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-23 00:34:24.618708 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-23 00:34:24.684005 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:24.684232 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-23 00:34:24.685107 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-23 00:34:24.756420 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:24.757474 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-23 00:34:24.760792 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-23 00:34:24.875465 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:24.875939 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-23 00:34:24.876754 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-23 00:34:24.877836 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:24.878188 | orchestrator | 2025-05-23 00:34:24.880667 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-23 00:34:24.880698 | orchestrator | Friday 23 May 2025 00:34:24 +0000 (0:00:00.558) 0:06:22.152 ************ 2025-05-23 00:34:25.001507 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:25.073396 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:25.132642 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:25.193129 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:25.259817 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:25.366916 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:25.372835 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:25.380227 | orchestrator | 2025-05-23 00:34:25.380895 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-23 00:34:25.381203 | orchestrator | Friday 23 May 2025 00:34:25 +0000 (0:00:00.487) 0:06:22.640 ************ 2025-05-23 00:34:25.511172 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:25.581431 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:25.647372 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:25.720807 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:25.786858 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:25.890770 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:25.892855 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:25.893684 | orchestrator | 2025-05-23 00:34:25.894281 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-23 00:34:25.897735 | orchestrator | Friday 23 May 2025 00:34:25 +0000 (0:00:00.525) 0:06:23.166 ************ 2025-05-23 00:34:26.034381 | orchestrator | skipping: [testbed-manager] 2025-05-23 
00:34:26.101474 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:26.203044 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:26.275190 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:26.343132 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:26.460201 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:26.461000 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:26.461594 | orchestrator | 2025-05-23 00:34:26.462531 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-23 00:34:26.463255 | orchestrator | Friday 23 May 2025 00:34:26 +0000 (0:00:00.570) 0:06:23.737 ************ 2025-05-23 00:34:32.535246 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:32.536350 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:32.536591 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:32.539504 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:32.540146 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:32.541867 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:32.541897 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:32.543534 | orchestrator | 2025-05-23 00:34:32.543568 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-23 00:34:32.543583 | orchestrator | Friday 23 May 2025 00:34:32 +0000 (0:00:06.068) 0:06:29.805 ************ 2025-05-23 00:34:33.359283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:34:33.359453 | orchestrator | 2025-05-23 00:34:33.359825 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-23 00:34:33.360629 | orchestrator | Friday 23 May 2025 00:34:33 +0000 (0:00:00.829) 0:06:30.634 ************ 2025-05-23 00:34:33.759306 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:34.169262 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:34.169726 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:34.169891 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:34.171154 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:34.171708 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:34.174315 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:34.175689 | orchestrator | 2025-05-23 00:34:34.175728 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-23 00:34:34.175741 | orchestrator | Friday 23 May 2025 00:34:34 +0000 (0:00:00.809) 0:06:31.444 ************ 2025-05-23 00:34:34.561293 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:34.984052 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:34.985121 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:34.988122 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:34.988150 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:34.988162 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:34.988319 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:34.989597 | orchestrator | 2025-05-23 00:34:34.990265 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-23 00:34:34.991328 | orchestrator | Friday 23 May 2025 00:34:34 +0000 (0:00:00.816) 
0:06:32.260 ************ 2025-05-23 00:34:36.482351 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:36.482594 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:36.483280 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:36.483752 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:36.484560 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:36.485731 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:36.486543 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:36.487268 | orchestrator | 2025-05-23 00:34:36.487998 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-23 00:34:36.490305 | orchestrator | Friday 23 May 2025 00:34:36 +0000 (0:00:01.498) 0:06:33.758 ************ 2025-05-23 00:34:36.610158 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:37.776507 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:37.776738 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:37.777286 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:37.777661 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:37.778414 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:37.779013 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:37.779716 | orchestrator | 2025-05-23 00:34:37.780000 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-23 00:34:37.780719 | orchestrator | Friday 23 May 2025 00:34:37 +0000 (0:00:01.290) 0:06:35.049 ************ 2025-05-23 00:34:39.061529 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:39.061703 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:39.061723 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:39.061829 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:39.062599 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:39.063639 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:39.064371 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:39.064777 | orchestrator | 2025-05-23 00:34:39.065512 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-23 00:34:39.066194 | orchestrator | Friday 23 May 2025 00:34:39 +0000 (0:00:01.286) 0:06:36.336 ************ 2025-05-23 00:34:40.370752 | orchestrator | changed: [testbed-manager] 2025-05-23 00:34:40.370935 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:40.371959 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:40.372687 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:40.376753 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:40.377317 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:40.377800 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:40.378268 | orchestrator | 2025-05-23 00:34:40.379138 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-23 00:34:40.379809 | orchestrator | Friday 23 May 2025 00:34:40 +0000 (0:00:01.307) 0:06:37.643 ************ 2025-05-23 00:34:41.362578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:34:41.362686 | orchestrator | 2025-05-23 00:34:41.362702 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-23 
00:34:41.363926 | orchestrator | Friday 23 May 2025 00:34:41 +0000 (0:00:00.989) 0:06:38.633 ************ 2025-05-23 00:34:42.661617 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:42.661783 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:42.661891 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:42.661908 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:42.662308 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:42.662877 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:42.663391 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:42.663762 | orchestrator | 2025-05-23 00:34:42.664147 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-23 00:34:42.664673 | orchestrator | Friday 23 May 2025 00:34:42 +0000 (0:00:01.298) 0:06:39.932 ************ 2025-05-23 00:34:43.802895 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:43.805768 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:43.807228 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:43.807867 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:43.809433 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:43.810482 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:43.810716 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:43.811505 | orchestrator | 2025-05-23 00:34:43.812168 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-23 00:34:43.812870 | orchestrator | Friday 23 May 2025 00:34:43 +0000 (0:00:01.143) 0:06:41.076 ************ 2025-05-23 00:34:44.920193 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:44.920558 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:44.921713 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:44.921792 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:44.923001 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:44.923236 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:44.923948 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:44.924424 | orchestrator | 2025-05-23 00:34:44.925538 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-23 00:34:44.925924 | orchestrator | Friday 23 May 2025 00:34:44 +0000 (0:00:01.119) 0:06:42.195 ************ 2025-05-23 00:34:46.205590 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:46.206524 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:46.208358 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:46.208682 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:46.210293 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:46.211629 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:46.212320 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:46.213014 | orchestrator | 2025-05-23 00:34:46.214511 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-23 00:34:46.215245 | orchestrator | Friday 23 May 2025 00:34:46 +0000 (0:00:01.283) 0:06:43.479 ************ 2025-05-23 00:34:47.341754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:34:47.341889 | orchestrator | 2025-05-23 00:34:47.342158 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.342911 | orchestrator 
| Friday 23 May 2025 00:34:47 +0000 (0:00:00.844) 0:06:44.323 ************ 2025-05-23 00:34:47.343341 | orchestrator | 2025-05-23 00:34:47.343696 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.346944 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.041) 0:06:44.364 ************ 2025-05-23 00:34:47.346961 | orchestrator | 2025-05-23 00:34:47.346971 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.346980 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.037) 0:06:44.401 ************ 2025-05-23 00:34:47.346988 | orchestrator | 2025-05-23 00:34:47.346997 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.347006 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.037) 0:06:44.439 ************ 2025-05-23 00:34:47.347872 | orchestrator | 2025-05-23 00:34:47.348491 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.348838 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.059) 0:06:44.498 ************ 2025-05-23 00:34:47.349817 | orchestrator | 2025-05-23 00:34:47.350220 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.351710 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.037) 0:06:44.535 ************ 2025-05-23 00:34:47.354106 | orchestrator | 2025-05-23 00:34:47.354127 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-23 00:34:47.354137 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.037) 0:06:44.573 ************ 2025-05-23 00:34:47.354175 | orchestrator | 2025-05-23 00:34:47.354490 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-23 00:34:47.355321 | orchestrator | Friday 23 May 2025 00:34:47 +0000 (0:00:00.045) 0:06:44.618 ************ 2025-05-23 00:34:48.481503 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:48.488914 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:48.488979 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:48.491231 | orchestrator | 2025-05-23 00:34:48.494749 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-23 00:34:48.495976 | orchestrator | Friday 23 May 2025 00:34:48 +0000 (0:00:01.132) 0:06:45.750 ************ 2025-05-23 00:34:50.033256 | orchestrator | changed: [testbed-manager] 2025-05-23 00:34:50.033389 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:50.033408 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:50.033420 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:50.034909 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:50.034961 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:50.034976 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:50.035035 | orchestrator | 2025-05-23 00:34:50.035814 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-23 00:34:50.037701 | orchestrator | Friday 23 May 2025 00:34:50 +0000 (0:00:01.552) 0:06:47.303 ************ 2025-05-23 00:34:51.183147 | orchestrator | changed: [testbed-manager] 2025-05-23 00:34:51.183845 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:51.185859 | orchestrator | changed: [testbed-node-4] 
2025-05-23 00:34:51.186215 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:51.187216 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:51.187631 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:51.187931 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:51.188596 | orchestrator | 2025-05-23 00:34:51.188895 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-23 00:34:51.189424 | orchestrator | Friday 23 May 2025 00:34:51 +0000 (0:00:01.153) 0:06:48.456 ************ 2025-05-23 00:34:51.310284 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:53.233831 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:53.233954 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:53.234889 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:53.235162 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:53.236594 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:53.237560 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:53.238472 | orchestrator | 2025-05-23 00:34:53.239136 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-23 00:34:53.239668 | orchestrator | Friday 23 May 2025 00:34:53 +0000 (0:00:02.048) 0:06:50.505 ************ 2025-05-23 00:34:53.344016 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:53.344163 | orchestrator | 2025-05-23 00:34:53.344912 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-23 00:34:53.345699 | orchestrator | Friday 23 May 2025 00:34:53 +0000 (0:00:00.113) 0:06:50.618 ************ 2025-05-23 00:34:54.351726 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:54.352394 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:34:54.353726 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:34:54.353970 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:34:54.355309 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:34:54.355667 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:34:54.356155 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:34:54.356858 | orchestrator | 2025-05-23 00:34:54.357646 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-23 00:34:54.358111 | orchestrator | Friday 23 May 2025 00:34:54 +0000 (0:00:01.008) 0:06:51.626 ************ 2025-05-23 00:34:54.490675 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:54.566973 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:54.635026 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:54.703681 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:54.966564 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:34:55.100508 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:34:55.100715 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:34:55.102138 | orchestrator | 2025-05-23 00:34:55.102674 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-23 00:34:55.103416 | orchestrator | Friday 23 May 2025 00:34:55 +0000 (0:00:00.749) 0:06:52.375 ************ 2025-05-23 00:34:55.966408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 
00:34:55.966793 | orchestrator | 2025-05-23 00:34:55.970294 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-23 00:34:55.971223 | orchestrator | Friday 23 May 2025 00:34:55 +0000 (0:00:00.863) 0:06:53.239 ************ 2025-05-23 00:34:56.812130 | orchestrator | ok: [testbed-manager] 2025-05-23 00:34:56.812958 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:34:56.814648 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:34:56.815250 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:34:56.816941 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:34:56.816980 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:34:56.817665 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:34:56.818452 | orchestrator | 2025-05-23 00:34:56.819938 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-23 00:34:56.820172 | orchestrator | Friday 23 May 2025 00:34:56 +0000 (0:00:00.846) 0:06:54.086 ************ 2025-05-23 00:34:59.506687 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-23 00:34:59.506903 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-23 00:34:59.508338 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-23 00:34:59.512685 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-23 00:34:59.513523 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-23 00:34:59.513839 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-23 00:34:59.514681 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-23 00:34:59.515107 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-23 00:34:59.516170 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-23 00:34:59.516701 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-23 00:34:59.517326 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-23 00:34:59.518276 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-23 00:34:59.522392 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-23 00:34:59.522481 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-23 00:34:59.523467 | orchestrator | 2025-05-23 00:34:59.523818 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-23 00:34:59.524374 | orchestrator | Friday 23 May 2025 00:34:59 +0000 (0:00:02.693) 0:06:56.780 ************ 2025-05-23 00:34:59.643815 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:34:59.723969 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:34:59.790685 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:34:59.857995 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:34:59.938384 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:00.042214 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:00.042728 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:00.043779 | orchestrator | 2025-05-23 00:35:00.044959 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-23 00:35:00.045365 | orchestrator | Friday 23 May 2025 00:35:00 +0000 (0:00:00.536) 0:06:57.316 ************ 2025-05-23 00:35:00.884892 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:35:00.885029 | orchestrator | 2025-05-23 00:35:00.885047 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-23 00:35:00.885739 | orchestrator | Friday 23 May 2025 00:35:00 +0000 (0:00:00.839) 0:06:58.155 ************ 2025-05-23 00:35:01.337650 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:01.795029 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:01.796180 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:01.797204 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:01.797903 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:01.798782 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:01.799591 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:01.800402 | orchestrator | 2025-05-23 00:35:01.800777 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-23 00:35:01.801800 | orchestrator | Friday 23 May 2025 00:35:01 +0000 (0:00:00.912) 0:06:59.068 ************ 2025-05-23 00:35:02.280170 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:02.351637 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:02.905224 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:02.905760 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:02.907232 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:02.908725 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:02.909563 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:02.910551 | orchestrator | 2025-05-23 00:35:02.911770 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-23 00:35:02.912119 | orchestrator | Friday 23 May 2025 00:35:02 +0000 (0:00:01.110) 0:07:00.178 ************ 2025-05-23 00:35:03.047316 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:03.106325 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:03.171898 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:03.232778 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:03.292158 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:03.380538 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:03.381026 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:03.382418 | orchestrator | 2025-05-23 00:35:03.383123 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-23 00:35:03.384106 | orchestrator | Friday 23 May 2025 00:35:03 +0000 (0:00:00.478) 0:07:00.656 ************ 2025-05-23 00:35:04.775540 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:04.778237 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:04.778286 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:04.778298 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:04.778310 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:04.778619 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:04.778640 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:04.779900 | orchestrator | 2025-05-23 00:35:04.780803 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-23 00:35:04.781193 | orchestrator | Friday 23 May 2025 00:35:04 +0000 (0:00:01.392) 0:07:02.049 ************ 2025-05-23 
00:35:04.934847 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:05.005015 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:05.075203 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:05.150721 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:05.224273 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:05.326570 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:05.327235 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:05.328259 | orchestrator | 2025-05-23 00:35:05.328916 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-23 00:35:05.329935 | orchestrator | Friday 23 May 2025 00:35:05 +0000 (0:00:00.552) 0:07:02.601 ************ 2025-05-23 00:35:07.250985 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:07.251138 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:07.254351 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:07.254395 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:07.254623 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:07.254989 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:07.255634 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:07.255972 | orchestrator | 2025-05-23 00:35:07.256547 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-23 00:35:07.257190 | orchestrator | Friday 23 May 2025 00:35:07 +0000 (0:00:01.919) 0:07:04.521 ************ 2025-05-23 00:35:08.574301 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:08.574698 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:08.576206 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:08.576783 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:08.577973 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:08.578720 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:08.579678 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:08.580658 | orchestrator | 2025-05-23 00:35:08.581379 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-23 00:35:08.582232 | orchestrator | Friday 23 May 2025 00:35:08 +0000 (0:00:01.325) 0:07:05.847 ************ 2025-05-23 00:35:10.364183 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:10.364491 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:10.365950 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:10.367441 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:10.369009 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:10.369245 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:10.369989 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:10.370874 | orchestrator | 2025-05-23 00:35:10.371623 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-23 00:35:10.372509 | orchestrator | Friday 23 May 2025 00:35:10 +0000 (0:00:01.789) 0:07:07.636 ************ 2025-05-23 00:35:11.986432 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:11.988028 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:11.988695 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:11.989992 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:11.991195 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:11.992005 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:11.993050 | orchestrator | changed: [testbed-node-2] 
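The docker_compose role above removes any standalone docker-compose binary or distribution package, installs the docker-compose-plugin, and then wires Compose projects into systemd through an osism.target plus a docker-compose unit file. A rough sketch of that wiring is shown below; the unit contents and the "myservice"/%i placeholders are assumptions for illustration, not the files shipped by osism.commons.docker_compose:

  - name: Run a Compose project as a systemd unit below osism.target (illustrative sketch)
    hosts: all
    become: true
    tasks:
      - name: Copy osism.target systemd file          # hypothetical target content
        ansible.builtin.copy:
          dest: /etc/systemd/system/osism.target
          content: |
            [Unit]
            Description=OSISM services

            [Install]
            WantedBy=multi-user.target
        notify: Reload systemd daemon

      - name: Copy docker-compose systemd unit file   # "myservice" and its path are placeholders
        ansible.builtin.copy:
          dest: /etc/systemd/system/docker-compose@myservice.service
          content: |
            [Unit]
            Description=docker compose project %i
            Requires=docker.service
            After=docker.service

            [Service]
            WorkingDirectory=/opt/%i
            ExecStart=/usr/bin/docker compose up
            ExecStop=/usr/bin/docker compose down

            [Install]
            WantedBy=osism.target
        notify: Reload systemd daemon

      - name: Enable osism.target
        ansible.builtin.systemd:
          name: osism.target
          enabled: true

    handlers:
      - name: Reload systemd daemon
        ansible.builtin.systemd:
          daemon_reload: true

Grouping the per-project units under osism.target makes it possible to stop or start every Compose-managed service on a node with a single systemctl call.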
2025-05-23 00:35:11.993848 | orchestrator | 2025-05-23 00:35:11.994492 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-23 00:35:11.995160 | orchestrator | Friday 23 May 2025 00:35:11 +0000 (0:00:01.625) 0:07:09.262 ************ 2025-05-23 00:35:12.598628 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:13.018459 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:13.018901 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:13.020564 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:13.021585 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:13.023485 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:13.023685 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:13.024710 | orchestrator | 2025-05-23 00:35:13.025752 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-23 00:35:13.026676 | orchestrator | Friday 23 May 2025 00:35:13 +0000 (0:00:01.031) 0:07:10.293 ************ 2025-05-23 00:35:13.150372 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:13.213945 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:13.283148 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:13.346761 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:13.411441 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:13.798770 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:13.799679 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:13.800116 | orchestrator | 2025-05-23 00:35:13.801239 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-23 00:35:13.801499 | orchestrator | Friday 23 May 2025 00:35:13 +0000 (0:00:00.781) 0:07:11.074 ************ 2025-05-23 00:35:13.935879 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:13.997849 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:14.065600 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:14.137806 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:14.202505 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:14.311055 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:14.313026 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:14.313962 | orchestrator | 2025-05-23 00:35:14.314839 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-23 00:35:14.316005 | orchestrator | Friday 23 May 2025 00:35:14 +0000 (0:00:00.511) 0:07:11.586 ************ 2025-05-23 00:35:14.461021 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:14.527847 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:14.609774 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:14.675118 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:14.740259 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:14.849678 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:14.849898 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:14.851024 | orchestrator | 2025-05-23 00:35:14.851534 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-23 00:35:14.852468 | orchestrator | Friday 23 May 2025 00:35:14 +0000 (0:00:00.540) 0:07:12.127 ************ 2025-05-23 00:35:14.984927 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:15.055755 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:15.284330 | orchestrator | ok: [testbed-node-4] 2025-05-23 
00:35:15.350436 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:15.417338 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:15.536503 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:15.536883 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:15.537811 | orchestrator | 2025-05-23 00:35:15.539387 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-23 00:35:15.540265 | orchestrator | Friday 23 May 2025 00:35:15 +0000 (0:00:00.684) 0:07:12.811 ************ 2025-05-23 00:35:15.665363 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:15.737744 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:15.799623 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:15.865668 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:15.935447 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:16.045026 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:16.045229 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:16.046194 | orchestrator | 2025-05-23 00:35:16.046971 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-23 00:35:16.047699 | orchestrator | Friday 23 May 2025 00:35:16 +0000 (0:00:00.507) 0:07:13.319 ************ 2025-05-23 00:35:21.894252 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:21.894336 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:21.895582 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:21.895595 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:21.896267 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:21.896717 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:21.897254 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:21.898909 | orchestrator | 2025-05-23 00:35:21.900872 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-23 00:35:21.901480 | orchestrator | Friday 23 May 2025 00:35:21 +0000 (0:00:05.850) 0:07:19.170 ************ 2025-05-23 00:35:22.035740 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:22.103023 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:22.174933 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:22.238365 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:22.301083 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:22.435640 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:22.438664 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:22.438763 | orchestrator | 2025-05-23 00:35:22.438868 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-23 00:35:22.445703 | orchestrator | Friday 23 May 2025 00:35:22 +0000 (0:00:00.541) 0:07:19.712 ************ 2025-05-23 00:35:23.517084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:35:23.518135 | orchestrator | 2025-05-23 00:35:23.519335 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-23 00:35:23.523140 | orchestrator | Friday 23 May 2025 00:35:23 +0000 (0:00:01.078) 0:07:20.790 ************ 2025-05-23 00:35:25.295708 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:25.295782 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:25.296772 | orchestrator | ok: 
[testbed-node-4] 2025-05-23 00:35:25.297601 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:25.300060 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:25.300075 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:25.300871 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:25.301668 | orchestrator | 2025-05-23 00:35:25.302143 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-23 00:35:25.302839 | orchestrator | Friday 23 May 2025 00:35:25 +0000 (0:00:01.778) 0:07:22.569 ************ 2025-05-23 00:35:26.404966 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:26.405343 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:26.406228 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:26.407217 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:26.408036 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:26.409029 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:26.409383 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:26.410347 | orchestrator | 2025-05-23 00:35:26.410875 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-23 00:35:26.411408 | orchestrator | Friday 23 May 2025 00:35:26 +0000 (0:00:01.108) 0:07:23.677 ************ 2025-05-23 00:35:26.830599 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:27.262560 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:27.262669 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:27.262683 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:27.263671 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:27.263870 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:27.264632 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:27.264895 | orchestrator | 2025-05-23 00:35:27.265361 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-23 00:35:27.266178 | orchestrator | Friday 23 May 2025 00:35:27 +0000 (0:00:00.862) 0:07:24.540 ************ 2025-05-23 00:35:29.162778 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.163395 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.165402 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.168473 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.168515 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.168529 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.170896 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-23 00:35:29.170921 | orchestrator | 2025-05-23 00:35:29.171367 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-23 00:35:29.172094 | orchestrator | 
Friday 23 May 2025 00:35:29 +0000 (0:00:01.896) 0:07:26.437 ************ 2025-05-23 00:35:29.925622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:35:29.925754 | orchestrator | 2025-05-23 00:35:29.926084 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-23 00:35:29.926873 | orchestrator | Friday 23 May 2025 00:35:29 +0000 (0:00:00.765) 0:07:27.202 ************ 2025-05-23 00:35:38.428546 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:38.429024 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:38.430009 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:38.431255 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:38.432133 | orchestrator | changed: [testbed-manager] 2025-05-23 00:35:38.432932 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:38.434542 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:38.435345 | orchestrator | 2025-05-23 00:35:38.436844 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-23 00:35:38.436983 | orchestrator | Friday 23 May 2025 00:35:38 +0000 (0:00:08.500) 0:07:35.702 ************ 2025-05-23 00:35:40.310267 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:40.310907 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:40.312610 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:40.312654 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:40.313234 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:40.314248 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:40.317768 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:40.317814 | orchestrator | 2025-05-23 00:35:40.317831 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-23 00:35:40.317846 | orchestrator | Friday 23 May 2025 00:35:40 +0000 (0:00:01.881) 0:07:37.584 ************ 2025-05-23 00:35:41.562329 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:41.562553 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:41.563596 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:41.564298 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:41.566706 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:41.567429 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:41.568069 | orchestrator | 2025-05-23 00:35:41.568360 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-23 00:35:41.569004 | orchestrator | Friday 23 May 2025 00:35:41 +0000 (0:00:01.254) 0:07:38.838 ************ 2025-05-23 00:35:42.906726 | orchestrator | changed: [testbed-manager] 2025-05-23 00:35:42.907359 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:42.908285 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:42.911595 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:42.911633 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:42.911645 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:42.911657 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:42.911670 | orchestrator | 2025-05-23 00:35:42.913735 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-23 00:35:42.913758 | orchestrator | 2025-05-23 
00:35:42.913771 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-23 00:35:42.913782 | orchestrator | Friday 23 May 2025 00:35:42 +0000 (0:00:01.344) 0:07:40.182 ************ 2025-05-23 00:35:43.038389 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:43.104767 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:43.163590 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:43.221897 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:43.287874 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:43.401583 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:43.402308 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:43.402731 | orchestrator | 2025-05-23 00:35:43.403561 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-23 00:35:43.404377 | orchestrator | 2025-05-23 00:35:43.407905 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-23 00:35:43.408223 | orchestrator | Friday 23 May 2025 00:35:43 +0000 (0:00:00.493) 0:07:40.676 ************ 2025-05-23 00:35:44.707255 | orchestrator | changed: [testbed-manager] 2025-05-23 00:35:44.707946 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:44.708840 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:44.709683 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:44.710677 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:44.711408 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:44.712133 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:44.712776 | orchestrator | 2025-05-23 00:35:44.713285 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-23 00:35:44.713805 | orchestrator | Friday 23 May 2025 00:35:44 +0000 (0:00:01.303) 0:07:41.980 ************ 2025-05-23 00:35:46.211456 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:46.211800 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:46.212300 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:46.212900 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:46.213645 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:46.217119 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:46.217257 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:46.217285 | orchestrator | 2025-05-23 00:35:46.217306 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-23 00:35:46.217321 | orchestrator | Friday 23 May 2025 00:35:46 +0000 (0:00:01.506) 0:07:43.487 ************ 2025-05-23 00:35:46.354259 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:35:46.421456 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:35:46.487675 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:35:46.767746 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:35:46.826913 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:35:47.263444 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:35:47.263583 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:35:47.264176 | orchestrator | 2025-05-23 00:35:47.265135 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-23 00:35:47.265829 | orchestrator | Friday 23 May 2025 00:35:47 +0000 (0:00:01.051) 0:07:44.538 ************ 2025-05-23 00:35:48.528247 | orchestrator | changed: 
[testbed-manager] 2025-05-23 00:35:48.528971 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:48.529709 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:48.530686 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:48.531306 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:48.533215 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:48.533234 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:48.533243 | orchestrator | 2025-05-23 00:35:48.533253 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-23 00:35:48.533454 | orchestrator | 2025-05-23 00:35:48.534429 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-23 00:35:48.534661 | orchestrator | Friday 23 May 2025 00:35:48 +0000 (0:00:01.263) 0:07:45.802 ************ 2025-05-23 00:35:49.375608 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:35:49.376739 | orchestrator | 2025-05-23 00:35:49.378260 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-23 00:35:49.379027 | orchestrator | Friday 23 May 2025 00:35:49 +0000 (0:00:00.847) 0:07:46.649 ************ 2025-05-23 00:35:49.773936 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:49.840523 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:50.381431 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:35:50.381526 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:50.382219 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:50.383568 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:50.385375 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:50.385850 | orchestrator | 2025-05-23 00:35:50.386528 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-23 00:35:50.386817 | orchestrator | Friday 23 May 2025 00:35:50 +0000 (0:00:01.005) 0:07:47.655 ************ 2025-05-23 00:35:51.482126 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:51.482550 | orchestrator | changed: [testbed-manager] 2025-05-23 00:35:51.483783 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:51.487531 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:51.487558 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:51.487572 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:51.487590 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:51.487602 | orchestrator | 2025-05-23 00:35:51.487615 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-23 00:35:51.487789 | orchestrator | Friday 23 May 2025 00:35:51 +0000 (0:00:01.100) 0:07:48.756 ************ 2025-05-23 00:35:52.406204 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:35:52.406306 | orchestrator | 2025-05-23 00:35:52.407299 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-23 00:35:52.408026 | orchestrator | Friday 23 May 2025 00:35:52 +0000 (0:00:00.921) 0:07:49.677 ************ 2025-05-23 00:35:53.208427 | orchestrator | ok: [testbed-manager] 2025-05-23 00:35:53.209279 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:35:53.211229 | orchestrator | ok: 
[testbed-node-4] 2025-05-23 00:35:53.211949 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:35:53.212846 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:35:53.213881 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:35:53.214739 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:35:53.216241 | orchestrator | 2025-05-23 00:35:53.216454 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-23 00:35:53.217627 | orchestrator | Friday 23 May 2025 00:35:53 +0000 (0:00:00.802) 0:07:50.480 ************ 2025-05-23 00:35:54.295238 | orchestrator | changed: [testbed-manager] 2025-05-23 00:35:54.295518 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:35:54.297002 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:35:54.298703 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:35:54.299572 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:35:54.301041 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:35:54.301974 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:35:54.302734 | orchestrator | 2025-05-23 00:35:54.304462 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:35:54.304515 | orchestrator | 2025-05-23 00:35:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:35:54.304571 | orchestrator | 2025-05-23 00:35:54 | INFO  | Please wait and do not abort execution. 2025-05-23 00:35:54.305367 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-23 00:35:54.306322 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-23 00:35:54.306902 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-23 00:35:54.308057 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-23 00:35:54.308916 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-23 00:35:54.309372 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-23 00:35:54.310200 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-23 00:35:54.311029 | orchestrator | 2025-05-23 00:35:54.311762 | orchestrator | Friday 23 May 2025 00:35:54 +0000 (0:00:01.087) 0:07:51.568 ************ 2025-05-23 00:35:54.312625 | orchestrator | =============================================================================== 2025-05-23 00:35:54.312719 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.80s 2025-05-23 00:35:54.313586 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.42s 2025-05-23 00:35:54.314550 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.49s 2025-05-23 00:35:54.315062 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.38s 2025-05-23 00:35:54.315933 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.33s 2025-05-23 00:35:54.316721 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.25s 2025-05-23 00:35:54.317360 | orchestrator | osism.commons.packages : Remove 
dependencies that are no longer required -- 11.58s 2025-05-23 00:35:54.317951 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.93s 2025-05-23 00:35:54.318631 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.32s 2025-05-23 00:35:54.319438 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.50s 2025-05-23 00:35:54.319861 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.48s 2025-05-23 00:35:54.320581 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.90s 2025-05-23 00:35:54.321106 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.69s 2025-05-23 00:35:54.321798 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.61s 2025-05-23 00:35:54.322213 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.41s 2025-05-23 00:35:54.322922 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.70s 2025-05-23 00:35:54.323681 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.07s 2025-05-23 00:35:54.324024 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.93s 2025-05-23 00:35:54.324766 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.89s 2025-05-23 00:35:54.325218 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.85s 2025-05-23 00:35:54.960428 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-23 00:35:54.960534 | orchestrator | + osism apply network 2025-05-23 00:35:56.746658 | orchestrator | 2025-05-23 00:35:56 | INFO  | Task 0e6a2408-043d-4afd-8884-984d2d21c04c (network) was prepared for execution. 2025-05-23 00:35:56.746774 | orchestrator | 2025-05-23 00:35:56 | INFO  | It takes a moment until task 0e6a2408-043d-4afd-8884-984d2d21c04c (network) has been started and output is visible here. 
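The "osism apply network" play that follows renders /etc/netplan/01-osism.yaml on every host, removes the cloud-init default 50-cloud-init.yaml, and installs networkd-dispatcher hook scripts (iptables.sh and vxlan.sh on the manager, vxlan.sh on the nodes). As a rough sketch of the equivalent manual steps on a single node, using only standard netplan commands; the interface name and the DHCP setting below are illustrative placeholders, not values taken from this run:

  # write a minimal netplan file (interface name and addressing are placeholders)
  cat > /etc/netplan/01-osism.yaml <<'EOF'
  network:
    version: 2
    ethernets:
      ens3:
        dhcp4: true
  EOF
  chmod 600 /etc/netplan/01-osism.yaml
  netplan generate    # validate the rendered configuration
  netplan apply       # activate it via the networkd renderer
  rm -f /etc/netplan/50-cloud-init.yaml   # drop the superseded cloud-init file, as the play does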
2025-05-23 00:36:00.082672 | orchestrator | 2025-05-23 00:36:00.083343 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-23 00:36:00.084076 | orchestrator | 2025-05-23 00:36:00.085104 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-23 00:36:00.086390 | orchestrator | Friday 23 May 2025 00:36:00 +0000 (0:00:00.224) 0:00:00.224 ************ 2025-05-23 00:36:00.255992 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:00.338423 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:00.416326 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:00.495137 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:00.574914 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:00.851554 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:00.852673 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:00.853787 | orchestrator | 2025-05-23 00:36:00.854493 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-23 00:36:00.855340 | orchestrator | Friday 23 May 2025 00:36:00 +0000 (0:00:00.768) 0:00:00.992 ************ 2025-05-23 00:36:02.118887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:36:02.119367 | orchestrator | 2025-05-23 00:36:02.119727 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-23 00:36:02.120524 | orchestrator | Friday 23 May 2025 00:36:02 +0000 (0:00:01.266) 0:00:02.258 ************ 2025-05-23 00:36:04.052291 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:04.054343 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:04.054379 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:04.055445 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:04.055797 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:04.056899 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:04.057327 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:04.059111 | orchestrator | 2025-05-23 00:36:04.059130 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-23 00:36:04.059140 | orchestrator | Friday 23 May 2025 00:36:04 +0000 (0:00:01.931) 0:00:04.190 ************ 2025-05-23 00:36:05.724106 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:05.724959 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:05.726135 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:05.727535 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:05.728350 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:05.729229 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:05.730276 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:05.730995 | orchestrator | 2025-05-23 00:36:05.732022 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-23 00:36:05.732778 | orchestrator | Friday 23 May 2025 00:36:05 +0000 (0:00:01.674) 0:00:05.864 ************ 2025-05-23 00:36:06.217732 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-23 00:36:06.836978 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-23 00:36:06.838391 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-23 00:36:06.839327 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-23 00:36:06.842954 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-23 00:36:06.843716 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-23 00:36:06.844329 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-23 00:36:06.845364 | orchestrator | 2025-05-23 00:36:06.846333 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-23 00:36:06.847488 | orchestrator | Friday 23 May 2025 00:36:06 +0000 (0:00:01.112) 0:00:06.977 ************ 2025-05-23 00:36:08.545141 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 00:36:08.545511 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-23 00:36:08.546640 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:36:08.550843 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-23 00:36:08.551370 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-23 00:36:08.551927 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-23 00:36:08.552649 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 00:36:08.553316 | orchestrator | 2025-05-23 00:36:08.554121 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-23 00:36:08.554533 | orchestrator | Friday 23 May 2025 00:36:08 +0000 (0:00:01.710) 0:00:08.688 ************ 2025-05-23 00:36:10.160465 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:10.162455 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:36:10.162493 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:36:10.162505 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:36:10.162517 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:36:10.162528 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:36:10.162776 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:36:10.163236 | orchestrator | 2025-05-23 00:36:10.163925 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-23 00:36:10.163949 | orchestrator | Friday 23 May 2025 00:36:10 +0000 (0:00:01.608) 0:00:10.297 ************ 2025-05-23 00:36:10.602597 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:36:11.105972 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 00:36:11.106868 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-23 00:36:11.107951 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-23 00:36:11.111651 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 00:36:11.111690 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-23 00:36:11.111702 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-23 00:36:11.111714 | orchestrator | 2025-05-23 00:36:11.111726 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-23 00:36:11.111739 | orchestrator | Friday 23 May 2025 00:36:11 +0000 (0:00:00.953) 0:00:11.250 ************ 2025-05-23 00:36:11.524352 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:11.628400 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:12.214149 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:12.214826 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:12.215038 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:12.216020 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:12.216434 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:12.217086 | orchestrator | 2025-05-23 
00:36:12.217713 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-23 00:36:12.218440 | orchestrator | Friday 23 May 2025 00:36:12 +0000 (0:00:01.103) 0:00:12.354 ************ 2025-05-23 00:36:12.385302 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:36:12.472060 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:36:12.557516 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:36:12.638096 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:36:12.720448 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:36:13.091397 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:36:13.091891 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:36:13.091920 | orchestrator | 2025-05-23 00:36:13.092263 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-23 00:36:13.093411 | orchestrator | Friday 23 May 2025 00:36:13 +0000 (0:00:00.878) 0:00:13.232 ************ 2025-05-23 00:36:15.045122 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:15.046660 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:15.046695 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:15.049009 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:15.052128 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:15.053038 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:15.053586 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:15.054283 | orchestrator | 2025-05-23 00:36:15.054883 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-23 00:36:15.055287 | orchestrator | Friday 23 May 2025 00:36:15 +0000 (0:00:01.956) 0:00:15.188 ************ 2025-05-23 00:36:16.863202 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-23 00:36:16.863363 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.864686 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.868558 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.868656 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.868672 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.868747 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.869433 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-23 00:36:16.870662 | orchestrator | 2025-05-23 00:36:16.871938 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-23 00:36:16.872999 | orchestrator | Friday 23 May 2025 00:36:16 +0000 (0:00:01.813) 0:00:17.001 ************ 2025-05-23 00:36:18.383031 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:18.383283 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:36:18.383514 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:36:18.387087 | 
orchestrator | changed: [testbed-node-1] 2025-05-23 00:36:18.387115 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:36:18.387127 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:36:18.388980 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:36:18.389932 | orchestrator | 2025-05-23 00:36:18.390299 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-23 00:36:18.391090 | orchestrator | Friday 23 May 2025 00:36:18 +0000 (0:00:01.524) 0:00:18.525 ************ 2025-05-23 00:36:19.884734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:36:19.885448 | orchestrator | 2025-05-23 00:36:19.886377 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-23 00:36:19.887325 | orchestrator | Friday 23 May 2025 00:36:19 +0000 (0:00:01.498) 0:00:20.024 ************ 2025-05-23 00:36:20.463555 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:20.910311 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:20.911075 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:20.913594 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:20.913622 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:20.916191 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:20.916328 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:20.916353 | orchestrator | 2025-05-23 00:36:20.916374 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-23 00:36:20.916390 | orchestrator | Friday 23 May 2025 00:36:20 +0000 (0:00:01.029) 0:00:21.053 ************ 2025-05-23 00:36:21.072879 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:21.158680 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:21.376634 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:21.459652 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:21.547545 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:21.688991 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:21.689485 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:21.690857 | orchestrator | 2025-05-23 00:36:21.690881 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-23 00:36:21.691732 | orchestrator | Friday 23 May 2025 00:36:21 +0000 (0:00:00.776) 0:00:21.830 ************ 2025-05-23 00:36:22.097629 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.097861 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.206799 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.207654 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.706270 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.707492 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.708220 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.711040 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.711077 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.711511 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.712732 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.713693 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.714472 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-23 00:36:22.715422 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-23 00:36:22.716492 | orchestrator | 2025-05-23 00:36:22.716721 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-23 00:36:22.717384 | orchestrator | Friday 23 May 2025 00:36:22 +0000 (0:00:01.018) 0:00:22.849 ************ 2025-05-23 00:36:23.089703 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:36:23.175391 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:36:23.265700 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:36:23.351146 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:36:23.437582 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:36:24.592166 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:36:24.592409 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:36:24.592640 | orchestrator | 2025-05-23 00:36:24.594904 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-23 00:36:24.595698 | orchestrator | Friday 23 May 2025 00:36:24 +0000 (0:00:01.882) 0:00:24.731 ************ 2025-05-23 00:36:24.750866 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:36:24.831765 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:36:25.077528 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:36:25.160585 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:36:25.253136 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:36:25.300622 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:36:25.301414 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:36:25.302579 | orchestrator | 2025-05-23 00:36:25.303667 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:36:25.304651 | orchestrator | 2025-05-23 00:36:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:36:25.304676 | orchestrator | 2025-05-23 00:36:25 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:36:25.306060 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.307409 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.308265 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.309366 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.310323 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.311344 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.312121 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:36:25.312841 | orchestrator | 2025-05-23 00:36:25.313691 | orchestrator | Friday 23 May 2025 00:36:25 +0000 (0:00:00.711) 0:00:25.443 ************ 2025-05-23 00:36:25.314103 | orchestrator | =============================================================================== 2025-05-23 00:36:25.314716 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.96s 2025-05-23 00:36:25.315118 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2025-05-23 00:36:25.315508 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.88s 2025-05-23 00:36:25.316222 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.81s 2025-05-23 00:36:25.316586 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.71s 2025-05-23 00:36:25.316942 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.67s 2025-05-23 00:36:25.317334 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.61s 2025-05-23 00:36:25.317687 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s 2025-05-23 00:36:25.318113 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.50s 2025-05-23 00:36:25.318619 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2025-05-23 00:36:25.319005 | orchestrator | osism.commons.network : Create required directories --------------------- 1.11s 2025-05-23 00:36:25.319332 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.10s 2025-05-23 00:36:25.319747 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2025-05-23 00:36:25.320356 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.02s 2025-05-23 00:36:25.320583 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.95s 2025-05-23 00:36:25.321051 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.88s 2025-05-23 00:36:25.321619 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.78s 2025-05-23 00:36:25.322075 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.77s 2025-05-23 00:36:25.322266 | orchestrator | osism.commons.network : Netplan configuration changed 
------------------- 0.71s 2025-05-23 00:36:25.863973 | orchestrator | + osism apply wireguard 2025-05-23 00:36:27.241126 | orchestrator | 2025-05-23 00:36:27 | INFO  | Task 4d402985-49b8-460f-8515-812dfb90c2f2 (wireguard) was prepared for execution. 2025-05-23 00:36:27.241330 | orchestrator | 2025-05-23 00:36:27 | INFO  | It takes a moment until task 4d402985-49b8-460f-8515-812dfb90c2f2 (wireguard) has been started and output is visible here. 2025-05-23 00:36:30.266325 | orchestrator | 2025-05-23 00:36:30.266549 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-23 00:36:30.266847 | orchestrator | 2025-05-23 00:36:30.267299 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-23 00:36:30.268518 | orchestrator | Friday 23 May 2025 00:36:30 +0000 (0:00:00.158) 0:00:00.158 ************ 2025-05-23 00:36:31.647204 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:31.648153 | orchestrator | 2025-05-23 00:36:31.648184 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-23 00:36:31.648292 | orchestrator | Friday 23 May 2025 00:36:31 +0000 (0:00:01.386) 0:00:01.544 ************ 2025-05-23 00:36:37.938923 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:37.939070 | orchestrator | 2025-05-23 00:36:37.939532 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-23 00:36:37.940416 | orchestrator | Friday 23 May 2025 00:36:37 +0000 (0:00:06.291) 0:00:07.836 ************ 2025-05-23 00:36:38.457876 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:38.458883 | orchestrator | 2025-05-23 00:36:38.458916 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-23 00:36:38.459222 | orchestrator | Friday 23 May 2025 00:36:38 +0000 (0:00:00.520) 0:00:08.357 ************ 2025-05-23 00:36:38.879799 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:38.880977 | orchestrator | 2025-05-23 00:36:38.881811 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-23 00:36:38.882702 | orchestrator | Friday 23 May 2025 00:36:38 +0000 (0:00:00.421) 0:00:08.778 ************ 2025-05-23 00:36:39.403488 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:39.403578 | orchestrator | 2025-05-23 00:36:39.405216 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-23 00:36:39.405242 | orchestrator | Friday 23 May 2025 00:36:39 +0000 (0:00:00.521) 0:00:09.300 ************ 2025-05-23 00:36:39.965933 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:39.966244 | orchestrator | 2025-05-23 00:36:39.968141 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-23 00:36:39.969220 | orchestrator | Friday 23 May 2025 00:36:39 +0000 (0:00:00.563) 0:00:09.863 ************ 2025-05-23 00:36:40.368890 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:40.371139 | orchestrator | 2025-05-23 00:36:40.371995 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-23 00:36:40.373206 | orchestrator | Friday 23 May 2025 00:36:40 +0000 (0:00:00.404) 0:00:10.268 ************ 2025-05-23 00:36:41.579884 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:41.581663 | orchestrator | 2025-05-23 00:36:41.581715 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-23 00:36:41.582400 | orchestrator | Friday 23 May 2025 00:36:41 +0000 (0:00:01.209) 0:00:11.477 ************ 2025-05-23 00:36:42.518821 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-23 00:36:42.519423 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:42.520627 | orchestrator | 2025-05-23 00:36:42.521304 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-23 00:36:42.522127 | orchestrator | Friday 23 May 2025 00:36:42 +0000 (0:00:00.938) 0:00:12.416 ************ 2025-05-23 00:36:44.165331 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:44.166475 | orchestrator | 2025-05-23 00:36:44.167482 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-23 00:36:44.169021 | orchestrator | Friday 23 May 2025 00:36:44 +0000 (0:00:01.645) 0:00:14.061 ************ 2025-05-23 00:36:45.044253 | orchestrator | changed: [testbed-manager] 2025-05-23 00:36:45.045427 | orchestrator | 2025-05-23 00:36:45.045467 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:36:45.045504 | orchestrator | 2025-05-23 00:36:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:36:45.045519 | orchestrator | 2025-05-23 00:36:45 | INFO  | Please wait and do not abort execution. 2025-05-23 00:36:45.045773 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:36:45.046327 | orchestrator | 2025-05-23 00:36:45.046758 | orchestrator | Friday 23 May 2025 00:36:45 +0000 (0:00:00.880) 0:00:14.942 ************ 2025-05-23 00:36:45.047200 | orchestrator | =============================================================================== 2025-05-23 00:36:45.048500 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.29s 2025-05-23 00:36:45.049144 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-05-23 00:36:45.049664 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.39s 2025-05-23 00:36:45.050469 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s 2025-05-23 00:36:45.051144 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-05-23 00:36:45.052010 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s 2025-05-23 00:36:45.052812 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-05-23 00:36:45.053313 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-05-23 00:36:45.054115 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s 2025-05-23 00:36:45.054558 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-05-23 00:36:45.055228 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-23 00:36:45.609607 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-23 00:36:45.650602 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-23 00:36:45.650689 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-23 00:36:45.726787 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 183 0 --:--:-- --:--:-- --:--:-- 184 2025-05-23 00:36:45.744349 | orchestrator | + osism apply --environment custom workarounds 2025-05-23 00:36:47.131738 | orchestrator | 2025-05-23 00:36:47 | INFO  | Trying to run play workarounds in environment custom 2025-05-23 00:36:47.183488 | orchestrator | 2025-05-23 00:36:47 | INFO  | Task 1fb731dc-72d4-4e7e-a1ca-6fef7e0e90b7 (workarounds) was prepared for execution. 2025-05-23 00:36:47.183563 | orchestrator | 2025-05-23 00:36:47 | INFO  | It takes a moment until task 1fb731dc-72d4-4e7e-a1ca-6fef7e0e90b7 (workarounds) has been started and output is visible here. 2025-05-23 00:36:50.241924 | orchestrator | 2025-05-23 00:36:50.245255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:36:50.245445 | orchestrator | 2025-05-23 00:36:50.245467 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-23 00:36:50.247082 | orchestrator | Friday 23 May 2025 00:36:50 +0000 (0:00:00.134) 0:00:00.134 ************ 2025-05-23 00:36:50.406631 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-23 00:36:50.515778 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-23 00:36:50.600015 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-23 00:36:50.684706 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-23 00:36:50.767476 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-23 00:36:51.013104 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-23 00:36:51.013196 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-23 00:36:51.013836 | orchestrator | 2025-05-23 00:36:51.014203 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-23 00:36:51.014502 | orchestrator | 2025-05-23 00:36:51.015305 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-23 00:36:51.016480 | orchestrator | Friday 23 May 2025 00:36:51 +0000 (0:00:00.771) 0:00:00.906 ************ 2025-05-23 00:36:53.528993 | orchestrator | ok: [testbed-manager] 2025-05-23 00:36:53.529232 | orchestrator | 2025-05-23 00:36:53.530123 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-23 00:36:53.532251 | orchestrator | 2025-05-23 00:36:53.534547 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-23 00:36:53.535757 | orchestrator | Friday 23 May 2025 00:36:53 +0000 (0:00:02.510) 0:00:03.416 ************ 2025-05-23 00:36:55.301685 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:36:55.301862 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:36:55.302957 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:36:55.306613 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:36:55.309131 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:36:55.309223 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:36:55.309732 | orchestrator | 2025-05-23 00:36:55.311326 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-23 00:36:55.312090 | orchestrator | 2025-05-23 
00:36:55.314364 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-23 00:36:55.314799 | orchestrator | Friday 23 May 2025 00:36:55 +0000 (0:00:01.776) 0:00:05.193 ************ 2025-05-23 00:36:56.710935 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.711589 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.713483 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.715043 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.715377 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.716276 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-23 00:36:56.718165 | orchestrator | 2025-05-23 00:36:56.718647 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-23 00:36:56.723692 | orchestrator | Friday 23 May 2025 00:36:56 +0000 (0:00:01.408) 0:00:06.601 ************ 2025-05-23 00:37:00.591949 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:00.593109 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:00.593854 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:00.594560 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:00.596083 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:00.597808 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:00.598826 | orchestrator | 2025-05-23 00:37:00.599979 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-23 00:37:00.600944 | orchestrator | Friday 23 May 2025 00:37:00 +0000 (0:00:03.884) 0:00:10.486 ************ 2025-05-23 00:37:00.806878 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:37:00.893387 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:37:00.974272 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:37:01.230383 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:37:01.376478 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:37:01.377160 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:37:01.378097 | orchestrator | 2025-05-23 00:37:01.378919 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-23 00:37:01.382089 | orchestrator | 2025-05-23 00:37:01.382922 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-23 00:37:01.383647 | orchestrator | Friday 23 May 2025 00:37:01 +0000 (0:00:00.782) 0:00:11.269 ************ 2025-05-23 00:37:03.057835 | orchestrator | changed: [testbed-manager] 2025-05-23 00:37:03.057949 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:03.057965 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:03.058089 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:03.058102 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:03.058237 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:03.059054 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:03.059502 | orchestrator | 2025-05-23 00:37:03.060187 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-23 00:37:03.060642 | orchestrator | Friday 23 May 2025 00:37:03 +0000 (0:00:01.672) 0:00:12.941 ************ 2025-05-23 00:37:04.620247 | orchestrator | changed: [testbed-manager] 2025-05-23 00:37:04.620525 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:04.621207 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:04.624393 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:04.624463 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:04.624477 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:04.624534 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:04.624945 | orchestrator | 2025-05-23 00:37:04.625914 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-23 00:37:04.627036 | orchestrator | Friday 23 May 2025 00:37:04 +0000 (0:00:01.564) 0:00:14.506 ************ 2025-05-23 00:37:06.115564 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:37:06.115784 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:06.115912 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:06.117102 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:06.117687 | orchestrator | ok: [testbed-manager] 2025-05-23 00:37:06.117990 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:06.118672 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:06.119370 | orchestrator | 2025-05-23 00:37:06.122211 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-23 00:37:06.122452 | orchestrator | Friday 23 May 2025 00:37:06 +0000 (0:00:01.501) 0:00:16.007 ************ 2025-05-23 00:37:07.839540 | orchestrator | changed: [testbed-manager] 2025-05-23 00:37:07.840174 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:07.841444 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:07.842402 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:07.843046 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:07.844188 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:07.845553 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:07.846536 | orchestrator | 2025-05-23 00:37:07.847780 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-23 00:37:07.848225 | orchestrator | Friday 23 May 2025 00:37:07 +0000 (0:00:01.724) 0:00:17.732 ************ 2025-05-23 00:37:07.993449 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:37:08.068936 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:37:08.143065 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:37:08.216234 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:37:08.447468 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:37:08.588253 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:37:08.589861 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:37:08.592949 | orchestrator | 2025-05-23 00:37:08.592998 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-23 00:37:08.593804 | orchestrator | 2025-05-23 00:37:08.594958 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-23 00:37:08.596102 | orchestrator | Friday 23 May 2025 00:37:08 +0000 (0:00:00.749) 0:00:18.482 ************ 2025-05-23 00:37:10.984872 | orchestrator | ok: [testbed-manager] 2025-05-23 00:37:10.986111 
| orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:10.986475 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:37:10.986990 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:10.987710 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:10.987895 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:10.989274 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:10.989299 | orchestrator | 2025-05-23 00:37:10.989381 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:37:10.990057 | orchestrator | 2025-05-23 00:37:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:37:10.990082 | orchestrator | 2025-05-23 00:37:10 | INFO  | Please wait and do not abort execution. 2025-05-23 00:37:10.990907 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:37:10.991070 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.992552 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.994242 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.994266 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.994277 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.994289 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:10.994389 | orchestrator | 2025-05-23 00:37:10.994708 | orchestrator | Friday 23 May 2025 00:37:10 +0000 (0:00:02.396) 0:00:20.878 ************ 2025-05-23 00:37:10.994984 | orchestrator | =============================================================================== 2025-05-23 00:37:10.995636 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.88s 2025-05-23 00:37:10.996755 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s 2025-05-23 00:37:10.996796 | orchestrator | Install python3-docker -------------------------------------------------- 2.40s 2025-05-23 00:37:10.996801 | orchestrator | Apply netplan configuration --------------------------------------------- 1.78s 2025-05-23 00:37:10.997022 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-05-23 00:37:10.997031 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s 2025-05-23 00:37:10.998403 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.56s 2025-05-23 00:37:10.998417 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-05-23 00:37:10.998422 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-05-23 00:37:10.998427 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s 2025-05-23 00:37:10.998432 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-05-23 00:37:10.998437 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.75s 2025-05-23 00:37:11.515854 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-23 00:37:12.928846 | orchestrator | 2025-05-23 00:37:12 | INFO  | Task e41142e7-0111-4c7c-b9ea-4baf7a9f7ba7 (reboot) was prepared for execution. 2025-05-23 00:37:12.928942 | orchestrator | 2025-05-23 00:37:12 | INFO  | It takes a moment until task e41142e7-0111-4c7c-b9ea-4baf7a9f7ba7 (reboot) has been started and output is visible here. 2025-05-23 00:37:15.976819 | orchestrator | 2025-05-23 00:37:15.977001 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:15.977722 | orchestrator | 2025-05-23 00:37:15.979224 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:15.980767 | orchestrator | Friday 23 May 2025 00:37:15 +0000 (0:00:00.139) 0:00:00.139 ************ 2025-05-23 00:37:16.069370 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:37:16.069757 | orchestrator | 2025-05-23 00:37:16.070905 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-23 00:37:16.072848 | orchestrator | Friday 23 May 2025 00:37:16 +0000 (0:00:00.095) 0:00:00.235 ************ 2025-05-23 00:37:16.971775 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:16.972965 | orchestrator | 2025-05-23 00:37:16.973429 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:16.974441 | orchestrator | Friday 23 May 2025 00:37:16 +0000 (0:00:00.902) 0:00:01.137 ************ 2025-05-23 00:37:17.081026 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:37:17.081497 | orchestrator | 2025-05-23 00:37:17.082887 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:17.083856 | orchestrator | 2025-05-23 00:37:17.084035 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:17.084757 | orchestrator | Friday 23 May 2025 00:37:17 +0000 (0:00:00.106) 0:00:01.244 ************ 2025-05-23 00:37:17.170797 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:37:17.171538 | orchestrator | 2025-05-23 00:37:17.171760 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-23 00:37:17.172426 | orchestrator | Friday 23 May 2025 00:37:17 +0000 (0:00:00.092) 0:00:01.337 ************ 2025-05-23 00:37:17.790313 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:17.791062 | orchestrator | 2025-05-23 00:37:17.791527 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:17.792293 | orchestrator | Friday 23 May 2025 00:37:17 +0000 (0:00:00.619) 0:00:01.956 ************ 2025-05-23 00:37:17.904559 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:37:17.906476 | orchestrator | 2025-05-23 00:37:17.908898 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:17.908946 | orchestrator | 2025-05-23 00:37:17.909618 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:17.910690 | orchestrator | Friday 23 May 2025 00:37:17 +0000 (0:00:00.114) 0:00:02.071 ************ 2025-05-23 00:37:17.997722 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:37:17.997916 | orchestrator | 2025-05-23 00:37:17.998493 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-23 00:37:17.998934 | orchestrator | Friday 23 May 2025 00:37:17 +0000 (0:00:00.092) 0:00:02.163 ************ 2025-05-23 00:37:18.754574 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:18.755281 | orchestrator | 2025-05-23 00:37:18.756027 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:18.756286 | orchestrator | Friday 23 May 2025 00:37:18 +0000 (0:00:00.756) 0:00:02.920 ************ 2025-05-23 00:37:18.855403 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:37:18.855632 | orchestrator | 2025-05-23 00:37:18.855654 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:18.855866 | orchestrator | 2025-05-23 00:37:18.857524 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:18.857674 | orchestrator | Friday 23 May 2025 00:37:18 +0000 (0:00:00.097) 0:00:03.017 ************ 2025-05-23 00:37:18.945090 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:37:18.945178 | orchestrator | 2025-05-23 00:37:18.945311 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-23 00:37:18.945868 | orchestrator | Friday 23 May 2025 00:37:18 +0000 (0:00:00.091) 0:00:03.109 ************ 2025-05-23 00:37:19.569714 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:19.570185 | orchestrator | 2025-05-23 00:37:19.571009 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:19.571524 | orchestrator | Friday 23 May 2025 00:37:19 +0000 (0:00:00.624) 0:00:03.733 ************ 2025-05-23 00:37:19.682759 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:37:19.683752 | orchestrator | 2025-05-23 00:37:19.684131 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:19.686710 | orchestrator | 2025-05-23 00:37:19.686963 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:19.687610 | orchestrator | Friday 23 May 2025 00:37:19 +0000 (0:00:00.115) 0:00:03.849 ************ 2025-05-23 00:37:19.778857 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:37:19.779649 | orchestrator | 2025-05-23 00:37:19.780378 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-23 00:37:19.780865 | orchestrator | Friday 23 May 2025 00:37:19 +0000 (0:00:00.096) 0:00:03.945 ************ 2025-05-23 00:37:20.405503 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:20.405768 | orchestrator | 2025-05-23 00:37:20.406919 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:20.407357 | orchestrator | Friday 23 May 2025 00:37:20 +0000 (0:00:00.624) 0:00:04.570 ************ 2025-05-23 00:37:20.526106 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:37:20.526236 | orchestrator | 2025-05-23 00:37:20.526252 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-23 00:37:20.526399 | orchestrator | 2025-05-23 00:37:20.526610 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-23 00:37:20.527241 | orchestrator | Friday 23 May 2025 00:37:20 +0000 (0:00:00.118) 0:00:04.688 ************ 2025-05-23 00:37:20.629974 | orchestrator | skipping: 
[testbed-node-5] 2025-05-23 00:37:20.630108 | orchestrator | 2025-05-23 00:37:20.631477 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-23 00:37:20.631970 | orchestrator | Friday 23 May 2025 00:37:20 +0000 (0:00:00.107) 0:00:04.796 ************ 2025-05-23 00:37:21.286222 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:21.286871 | orchestrator | 2025-05-23 00:37:21.287605 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-23 00:37:21.288595 | orchestrator | Friday 23 May 2025 00:37:21 +0000 (0:00:00.653) 0:00:05.450 ************ 2025-05-23 00:37:21.318810 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:37:21.319128 | orchestrator | 2025-05-23 00:37:21.320493 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:37:21.322074 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.322138 | orchestrator | 2025-05-23 00:37:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:37:21.322197 | orchestrator | 2025-05-23 00:37:21 | INFO  | Please wait and do not abort execution. 2025-05-23 00:37:21.322844 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.323244 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.323792 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.324233 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.324646 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:37:21.325045 | orchestrator | 2025-05-23 00:37:21.325408 | orchestrator | Friday 23 May 2025 00:37:21 +0000 (0:00:00.035) 0:00:05.486 ************ 2025-05-23 00:37:21.325971 | orchestrator | =============================================================================== 2025-05-23 00:37:21.326226 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.18s 2025-05-23 00:37:21.326498 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2025-05-23 00:37:21.326889 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s 2025-05-23 00:37:21.768053 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-23 00:37:23.214132 | orchestrator | 2025-05-23 00:37:23 | INFO  | Task b596f781-19f0-4677-be9f-42ce9d7a9e72 (wait-for-connection) was prepared for execution. 2025-05-23 00:37:23.214237 | orchestrator | 2025-05-23 00:37:23 | INFO  | It takes a moment until task b596f781-19f0-4677-be9f-42ce9d7a9e72 (wait-for-connection) has been started and output is visible here. 
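The wait-for-connection play that follows simply blocks until every rebooted node answers over SSH again before the deployment continues. A rough shell equivalent of that check, assuming the inventory hostnames resolve from the manager (a sketch, not the play's actual implementation):

  # poll each node until SSH accepts a connection again
  for host in testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
      sleep 5
    done
    echo "$host is reachable again"
  done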
2025-05-23 00:37:26.244973 | orchestrator | 2025-05-23 00:37:26.245084 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-23 00:37:26.245513 | orchestrator | 2025-05-23 00:37:26.246014 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-23 00:37:26.247678 | orchestrator | Friday 23 May 2025 00:37:26 +0000 (0:00:00.164) 0:00:00.164 ************ 2025-05-23 00:37:39.811931 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:39.812054 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:39.812071 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:39.812083 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:37:39.812261 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:39.813698 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:39.814494 | orchestrator | 2025-05-23 00:37:39.815092 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:37:39.815335 | orchestrator | 2025-05-23 00:37:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:37:39.815576 | orchestrator | 2025-05-23 00:37:39 | INFO  | Please wait and do not abort execution. 2025-05-23 00:37:39.816308 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.816751 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.816976 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.817490 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.817871 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.818091 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:37:39.818786 | orchestrator | 2025-05-23 00:37:39.818894 | orchestrator | Friday 23 May 2025 00:37:39 +0000 (0:00:13.567) 0:00:13.732 ************ 2025-05-23 00:37:39.819393 | orchestrator | =============================================================================== 2025-05-23 00:37:39.819663 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.57s 2025-05-23 00:37:40.282693 | orchestrator | + osism apply hddtemp 2025-05-23 00:37:41.675504 | orchestrator | 2025-05-23 00:37:41 | INFO  | Task a34c0aa6-601c-48bf-a112-3183aa223b95 (hddtemp) was prepared for execution. 2025-05-23 00:37:41.675602 | orchestrator | 2025-05-23 00:37:41 | INFO  | It takes a moment until task a34c0aa6-601c-48bf-a112-3183aa223b95 (hddtemp) has been started and output is visible here. 
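The wait-for-connection play shown above is essentially a thin wrapper around Ansible's ansible.builtin.wait_for_connection module, which keeps retrying the connection plugin until the rebooted hosts answer again. A minimal sketch, with illustrative delay and timeout values rather than the ones used by OSISM:

    # Sketch of a post-reboot reachability check.
    - name: Wait until remote systems are reachable
      hosts: testbed-nodes
      gather_facts: false
      tasks:
        - name: Wait until remote system is reachable
          ansible.builtin.wait_for_connection:
            delay: 5        # give the nodes a moment before the first attempt
            sleep: 5        # seconds between retries
            timeout: 600    # overall budget for the node to come back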
2025-05-23 00:37:44.725647 | orchestrator | 2025-05-23 00:37:44.729245 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-23 00:37:44.729771 | orchestrator | 2025-05-23 00:37:44.730706 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-23 00:37:44.731109 | orchestrator | Friday 23 May 2025 00:37:44 +0000 (0:00:00.196) 0:00:00.196 ************ 2025-05-23 00:37:44.869952 | orchestrator | ok: [testbed-manager] 2025-05-23 00:37:44.942227 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:45.015465 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:45.087831 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:45.162355 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:37:45.377194 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:45.377566 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:45.377886 | orchestrator | 2025-05-23 00:37:45.378166 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-23 00:37:45.379062 | orchestrator | Friday 23 May 2025 00:37:45 +0000 (0:00:00.652) 0:00:00.849 ************ 2025-05-23 00:37:46.431131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:37:46.432259 | orchestrator | 2025-05-23 00:37:46.432779 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-23 00:37:46.433746 | orchestrator | Friday 23 May 2025 00:37:46 +0000 (0:00:01.055) 0:00:01.904 ************ 2025-05-23 00:37:48.196722 | orchestrator | ok: [testbed-manager] 2025-05-23 00:37:48.196812 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:48.197150 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:48.198160 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:48.199411 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:48.199910 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:37:48.200433 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:48.200870 | orchestrator | 2025-05-23 00:37:48.201541 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-23 00:37:48.201975 | orchestrator | Friday 23 May 2025 00:37:48 +0000 (0:00:01.763) 0:00:03.668 ************ 2025-05-23 00:37:48.701355 | orchestrator | changed: [testbed-manager] 2025-05-23 00:37:49.130923 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:37:49.131009 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:37:49.131112 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:37:49.131171 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:37:49.131452 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:37:49.131763 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:37:49.132259 | orchestrator | 2025-05-23 00:37:49.133982 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-23 00:37:49.134700 | orchestrator | Friday 23 May 2025 00:37:49 +0000 (0:00:00.935) 0:00:04.603 ************ 2025-05-23 00:37:50.204044 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:37:50.206686 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:37:50.206720 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:37:50.206732 | orchestrator | ok: [testbed-node-3] 2025-05-23 
00:37:50.206984 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:37:50.207453 | orchestrator | ok: [testbed-manager] 2025-05-23 00:37:50.207722 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:37:50.208341 | orchestrator | 2025-05-23 00:37:50.208861 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-23 00:37:50.209602 | orchestrator | Friday 23 May 2025 00:37:50 +0000 (0:00:01.072) 0:00:05.675 ************ 2025-05-23 00:37:50.427208 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:37:50.500045 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:37:50.579113 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:37:50.656851 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:37:50.759451 | orchestrator | changed: [testbed-manager] 2025-05-23 00:37:50.760274 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:37:50.760634 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:37:50.762114 | orchestrator | 2025-05-23 00:37:50.765017 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-23 00:37:50.765507 | orchestrator | Friday 23 May 2025 00:37:50 +0000 (0:00:00.559) 0:00:06.234 ************ 2025-05-23 00:38:02.919814 | orchestrator | changed: [testbed-manager] 2025-05-23 00:38:02.919935 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:38:02.920292 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:38:02.921959 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:38:02.923997 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:38:02.924620 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:38:02.925870 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:38:02.926724 | orchestrator | 2025-05-23 00:38:02.927368 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-23 00:38:02.928084 | orchestrator | Friday 23 May 2025 00:38:02 +0000 (0:00:12.151) 0:00:18.386 ************ 2025-05-23 00:38:04.091170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:38:04.091461 | orchestrator | 2025-05-23 00:38:04.092438 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-23 00:38:04.093507 | orchestrator | Friday 23 May 2025 00:38:04 +0000 (0:00:01.173) 0:00:19.560 ************ 2025-05-23 00:38:05.945990 | orchestrator | changed: [testbed-manager] 2025-05-23 00:38:05.946608 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:38:05.947094 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:38:05.948082 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:38:05.949368 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:38:05.949764 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:38:05.950939 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:38:05.951890 | orchestrator | 2025-05-23 00:38:05.953570 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:38:05.953615 | orchestrator | 2025-05-23 00:38:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:38:05.953630 | orchestrator | 2025-05-23 00:38:05 | INFO  | Please wait and do not abort execution. 
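The drivetemp handling in the hddtemp role follows the usual Linux kernel-module workflow: persist the module so it is loaded on boot, check whether the running kernel knows it, and load it immediately only where needed (in this run it was loaded on testbed-manager and skipped on the nodes). A rough sketch of that workflow, assuming a modules-load.d file and the community.general.modprobe module; the real tasks in osism.services.hddtemp may differ:

    # Sketch: enable, detect, and conditionally load the drivetemp module.
    - name: Enable Kernel Module drivetemp
      ansible.builtin.copy:
        content: "drivetemp\n"
        dest: /etc/modules-load.d/drivetemp.conf
        mode: "0644"
      become: true

    - name: Check if drivetemp module is available
      ansible.builtin.command: modinfo drivetemp
      register: drivetemp_modinfo
      changed_when: false
      failed_when: false

    - name: Load Kernel Module drivetemp
      community.general.modprobe:
        name: drivetemp
        state: present
      become: true
      when: drivetemp_modinfo.rc == 0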
2025-05-23 00:38:05.953756 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:38:05.954215 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.954598 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.955442 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.955777 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.956174 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.957059 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:05.957233 | orchestrator | 2025-05-23 00:38:05.957805 | orchestrator | Friday 23 May 2025 00:38:05 +0000 (0:00:01.858) 0:00:21.419 ************ 2025-05-23 00:38:05.958463 | orchestrator | =============================================================================== 2025-05-23 00:38:05.959102 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.15s 2025-05-23 00:38:05.959611 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.86s 2025-05-23 00:38:05.960162 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.76s 2025-05-23 00:38:05.960497 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2025-05-23 00:38:05.960930 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.07s 2025-05-23 00:38:05.961192 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.06s 2025-05-23 00:38:05.961657 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.94s 2025-05-23 00:38:05.961977 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.65s 2025-05-23 00:38:05.962339 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.56s 2025-05-23 00:38:06.458298 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-23 00:38:07.991149 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-23 00:38:07.991248 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-23 00:38:07.991265 | orchestrator | + local max_attempts=60 2025-05-23 00:38:07.991279 | orchestrator | + local name=ceph-ansible 2025-05-23 00:38:07.991291 | orchestrator | + local attempt_num=1 2025-05-23 00:38:07.991800 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-23 00:38:08.028380 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-23 00:38:08.028516 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-23 00:38:08.028531 | orchestrator | + local max_attempts=60 2025-05-23 00:38:08.028544 | orchestrator | + local name=kolla-ansible 2025-05-23 00:38:08.028556 | orchestrator | + local attempt_num=1 2025-05-23 00:38:08.028798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-23 00:38:08.052235 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-23 00:38:08.052304 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-23 00:38:08.052317 | orchestrator | + local max_attempts=60 2025-05-23 00:38:08.052348 | orchestrator | + local name=osism-ansible 2025-05-23 00:38:08.052360 | orchestrator | + local attempt_num=1 2025-05-23 00:38:08.053051 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-23 00:38:08.080642 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-23 00:38:08.080702 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-23 00:38:08.080717 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-23 00:38:08.243642 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-23 00:38:08.405255 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-23 00:38:08.556582 | orchestrator | ARA in osism-ansible already disabled. 2025-05-23 00:38:08.732554 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-23 00:38:08.733633 | orchestrator | + osism apply gather-facts 2025-05-23 00:38:10.174941 | orchestrator | 2025-05-23 00:38:10 | INFO  | Task 5289a4ec-16cf-4226-8b7b-a1f98234992d (gather-facts) was prepared for execution. 2025-05-23 00:38:10.175041 | orchestrator | 2025-05-23 00:38:10 | INFO  | It takes a moment until task 5289a4ec-16cf-4226-8b7b-a1f98234992d (gather-facts) has been started and output is visible here. 2025-05-23 00:38:13.124387 | orchestrator | 2025-05-23 00:38:13.125459 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-23 00:38:13.128127 | orchestrator | 2025-05-23 00:38:13.128836 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:38:13.129984 | orchestrator | Friday 23 May 2025 00:38:13 +0000 (0:00:00.119) 0:00:00.119 ************ 2025-05-23 00:38:17.976215 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:38:17.976783 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:38:17.977879 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:38:17.978144 | orchestrator | ok: [testbed-manager] 2025-05-23 00:38:17.981897 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:38:17.982438 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:38:17.983515 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:38:17.984724 | orchestrator | 2025-05-23 00:38:17.985269 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-23 00:38:17.986061 | orchestrator | 2025-05-23 00:38:17.986847 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-23 00:38:17.987361 | orchestrator | Friday 23 May 2025 00:38:17 +0000 (0:00:04.854) 0:00:04.974 ************ 2025-05-23 00:38:18.120345 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:38:18.186621 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:38:18.254786 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:38:18.323102 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:38:18.401666 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:38:18.438192 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:38:18.438329 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:38:18.438950 | orchestrator | 2025-05-23 00:38:18.439730 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:38:18.440300 | orchestrator | 2025-05-23 00:38:18 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-23 00:38:18.440453 | orchestrator | 2025-05-23 00:38:18 | INFO  | Please wait and do not abort execution. 2025-05-23 00:38:18.441173 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.441856 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.442778 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.442966 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.443570 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.443848 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.444300 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 00:38:18.445001 | orchestrator | 2025-05-23 00:38:18.445230 | orchestrator | Friday 23 May 2025 00:38:18 +0000 (0:00:00.462) 0:00:05.436 ************ 2025-05-23 00:38:18.445745 | orchestrator | =============================================================================== 2025-05-23 00:38:18.446368 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2025-05-23 00:38:18.446907 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-05-23 00:38:18.789044 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-23 00:38:18.806571 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-23 00:38:18.823493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-23 00:38:18.834948 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-23 00:38:18.853346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-23 00:38:18.861808 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-23 00:38:18.875060 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-23 00:38:18.884919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-23 00:38:18.893930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-23 00:38:18.904935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-23 00:38:18.914315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-23 00:38:18.923772 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-23 00:38:18.932255 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-23 00:38:18.941340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-23 00:38:18.950348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-23 00:38:18.961329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-23 00:38:18.979569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-23 00:38:18.996950 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-23 00:38:19.008169 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-23 00:38:19.017176 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-23 00:38:19.028093 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-23 00:38:19.459867 | orchestrator | ok: Runtime: 0:24:37.453106 2025-05-23 00:38:19.550694 | 2025-05-23 00:38:19.550886 | TASK [Deploy services] 2025-05-23 00:38:20.083667 | orchestrator | skipping: Conditional result was False 2025-05-23 00:38:20.103218 | 2025-05-23 00:38:20.103426 | TASK [Deploy in a nutshell] 2025-05-23 00:38:20.792027 | orchestrator | + set -e 2025-05-23 00:38:20.792152 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-23 00:38:20.792167 | orchestrator | ++ export INTERACTIVE=false 2025-05-23 00:38:20.792176 | orchestrator | ++ INTERACTIVE=false 2025-05-23 00:38:20.792181 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-23 00:38:20.792185 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-23 00:38:20.792199 | orchestrator | + source /opt/manager-vars.sh 2025-05-23 00:38:20.792221 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-23 00:38:20.792232 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-23 00:38:20.792237 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-23 00:38:20.792243 | orchestrator | ++ CEPH_VERSION=reef 2025-05-23 00:38:20.792248 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-23 00:38:20.792255 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-23 00:38:20.792259 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-23 00:38:20.792267 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-23 00:38:20.792270 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-23 00:38:20.792276 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-23 00:38:20.792280 | orchestrator | ++ export ARA=false 2025-05-23 00:38:20.792284 | orchestrator | ++ ARA=false 2025-05-23 00:38:20.792288 | orchestrator | ++ export TEMPEST=false 2025-05-23 00:38:20.792292 | orchestrator | ++ TEMPEST=false 2025-05-23 00:38:20.792296 | orchestrator | ++ export IS_ZUUL=true 2025-05-23 00:38:20.792300 | orchestrator | ++ IS_ZUUL=true 2025-05-23 00:38:20.792304 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:38:20.792307 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.13 2025-05-23 00:38:20.792311 | orchestrator | ++ export EXTERNAL_API=false 2025-05-23 00:38:20.792315 | orchestrator | ++ EXTERNAL_API=false 2025-05-23 00:38:20.792319 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-23 00:38:20.792322 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-23 
00:38:20.792326 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-23 00:38:20.792330 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-23 00:38:20.792334 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-23 00:38:20.792337 | orchestrator | 2025-05-23 00:38:20.792342 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-23 00:38:20.792345 | orchestrator | + echo 2025-05-23 00:38:20.792349 | orchestrator | # PULL IMAGES 2025-05-23 00:38:20.792353 | orchestrator | 2025-05-23 00:38:20.792357 | orchestrator | + echo '# PULL IMAGES' 2025-05-23 00:38:20.792361 | orchestrator | + echo 2025-05-23 00:38:20.793538 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-23 00:38:20.823309 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-23 00:38:20.823344 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-23 00:38:22.199935 | orchestrator | 2025-05-23 00:38:22 | INFO  | Trying to run play pull-images in environment custom 2025-05-23 00:38:22.245239 | orchestrator | 2025-05-23 00:38:22 | INFO  | Task c8dca62a-d993-4569-adc3-4708d746ece8 (pull-images) was prepared for execution. 2025-05-23 00:38:22.245299 | orchestrator | 2025-05-23 00:38:22 | INFO  | It takes a moment until task c8dca62a-d993-4569-adc3-4708d746ece8 (pull-images) has been started and output is visible here. 2025-05-23 00:38:25.002430 | orchestrator | 2025-05-23 00:38:25.002482 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-23 00:38:25.002499 | orchestrator | 2025-05-23 00:38:25.003173 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-23 00:38:25.003714 | orchestrator | Friday 23 May 2025 00:38:24 +0000 (0:00:00.104) 0:00:00.104 ************ 2025-05-23 00:39:01.657190 | orchestrator | changed: [testbed-manager] 2025-05-23 00:39:01.657322 | orchestrator | 2025-05-23 00:39:01.659172 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-23 00:39:01.659276 | orchestrator | Friday 23 May 2025 00:39:01 +0000 (0:00:36.653) 0:00:36.758 ************ 2025-05-23 00:39:46.823190 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-23 00:39:46.823336 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-23 00:39:46.823354 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-23 00:39:46.823366 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-23 00:39:46.823377 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-23 00:39:46.823388 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-23 00:39:46.823400 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-23 00:39:46.823423 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-23 00:39:46.823682 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-23 00:39:46.823836 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-23 00:39:46.824219 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-23 00:39:46.824566 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-23 00:39:46.828092 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-23 00:39:46.828134 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-23 00:39:46.828145 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-23 00:39:46.828156 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-23 
00:39:46.828167 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-23 00:39:46.828178 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-23 00:39:46.828189 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-23 00:39:46.828200 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-23 00:39:46.832240 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-23 00:39:46.832267 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-23 00:39:46.832278 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-23 00:39:46.832289 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-23 00:39:46.832300 | orchestrator | 2025-05-23 00:39:46.832353 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:39:46.832452 | orchestrator | 2025-05-23 00:39:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:39:46.832619 | orchestrator | 2025-05-23 00:39:46 | INFO  | Please wait and do not abort execution. 2025-05-23 00:39:46.833726 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:39:46.835588 | orchestrator | 2025-05-23 00:39:46.835646 | orchestrator | Friday 23 May 2025 00:39:46 +0000 (0:00:45.160) 0:01:21.919 ************ 2025-05-23 00:39:46.836334 | orchestrator | =============================================================================== 2025-05-23 00:39:46.838407 | orchestrator | Pull other images ------------------------------------------------------ 45.16s 2025-05-23 00:39:46.838700 | orchestrator | Pull keystone image ---------------------------------------------------- 36.65s 2025-05-23 00:39:48.935386 | orchestrator | 2025-05-23 00:39:48 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-23 00:39:49.010120 | orchestrator | 2025-05-23 00:39:49 | INFO  | Task 824fe0c1-53e8-4e44-b3ff-f8666b4b10d5 (wipe-partitions) was prepared for execution. 2025-05-23 00:39:49.010222 | orchestrator | 2025-05-23 00:39:49 | INFO  | It takes a moment until task 824fe0c1-53e8-4e44-b3ff-f8666b4b10d5 (wipe-partitions) has been started and output is visible here. 
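Pre-pulling the Kolla images on the manager before the actual deployment keeps the later service rollout from stalling on registry downloads. A simplified sketch of such a pre-pull play follows; the registry, namespace, tag, and the shortened service list are placeholders, not values from the testbed configuration:

    # Sketch: warm the local image cache before deployment.
    - name: Pull images
      hosts: testbed-manager
      gather_facts: false
      tasks:
        - name: Pull keystone image
          ansible.builtin.command: >
            docker pull {{ image_registry | default('registry.example.com/kolla') }}/keystone:{{ image_tag | default('2024.2') }}
          changed_when: true

        - name: Pull other images
          ansible.builtin.command: >
            docker pull {{ image_registry | default('registry.example.com/kolla') }}/{{ item }}:{{ image_tag | default('2024.2') }}
          changed_when: true
          loop:
            - cinder
            - glance
            - horizon
            - neutron
            - nova
            - rabbitmq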
2025-05-23 00:39:52.309248 | orchestrator | 2025-05-23 00:39:52.311202 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-23 00:39:52.312351 | orchestrator | 2025-05-23 00:39:52.313664 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-23 00:39:52.316391 | orchestrator | Friday 23 May 2025 00:39:52 +0000 (0:00:00.128) 0:00:00.128 ************ 2025-05-23 00:39:52.896911 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:39:52.897233 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:39:52.898778 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:39:52.902715 | orchestrator | 2025-05-23 00:39:52.902906 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-23 00:39:52.903322 | orchestrator | Friday 23 May 2025 00:39:52 +0000 (0:00:00.592) 0:00:00.721 ************ 2025-05-23 00:39:53.039961 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:39:53.127694 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:39:53.129112 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:39:53.131430 | orchestrator | 2025-05-23 00:39:53.133320 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-23 00:39:53.134923 | orchestrator | Friday 23 May 2025 00:39:53 +0000 (0:00:00.229) 0:00:00.950 ************ 2025-05-23 00:39:53.843373 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:39:53.843475 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:39:53.843847 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:39:53.844354 | orchestrator | 2025-05-23 00:39:53.846309 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-23 00:39:53.849078 | orchestrator | Friday 23 May 2025 00:39:53 +0000 (0:00:00.711) 0:00:01.662 ************ 2025-05-23 00:39:54.004465 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:39:54.121599 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:39:54.121746 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:39:54.121823 | orchestrator | 2025-05-23 00:39:54.121842 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-23 00:39:54.122137 | orchestrator | Friday 23 May 2025 00:39:54 +0000 (0:00:00.280) 0:00:01.942 ************ 2025-05-23 00:39:55.275735 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-23 00:39:55.279377 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-23 00:39:55.280581 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-23 00:39:55.281267 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-23 00:39:55.282299 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-23 00:39:55.283176 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-23 00:39:55.283711 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-23 00:39:55.284983 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-23 00:39:55.285004 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-23 00:39:55.285330 | orchestrator | 2025-05-23 00:39:55.285983 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-23 00:39:55.286492 | orchestrator | Friday 23 May 2025 00:39:55 +0000 (0:00:01.156) 0:00:03.099 ************ 2025-05-23 00:39:56.600973 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-23 00:39:56.601214 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-23 00:39:56.601958 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-23 00:39:56.602511 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-23 00:39:56.602860 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-23 00:39:56.603248 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-23 00:39:56.603683 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-23 00:39:56.604109 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-23 00:39:56.604977 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-23 00:39:56.604997 | orchestrator | 2025-05-23 00:39:56.605431 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-23 00:39:56.605585 | orchestrator | Friday 23 May 2025 00:39:56 +0000 (0:00:01.326) 0:00:04.425 ************ 2025-05-23 00:39:58.816741 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-23 00:39:58.816850 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-23 00:39:58.816865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-23 00:39:58.816880 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-23 00:39:58.818251 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-23 00:39:58.818700 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-23 00:39:58.821355 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-23 00:39:58.821684 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-23 00:39:58.822144 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-23 00:39:58.822628 | orchestrator | 2025-05-23 00:39:58.823106 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-23 00:39:58.823609 | orchestrator | Friday 23 May 2025 00:39:58 +0000 (0:00:02.211) 0:00:06.637 ************ 2025-05-23 00:39:59.426292 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:39:59.426390 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:39:59.426403 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:39:59.429228 | orchestrator | 2025-05-23 00:39:59.429635 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-23 00:39:59.430153 | orchestrator | Friday 23 May 2025 00:39:59 +0000 (0:00:00.611) 0:00:07.248 ************ 2025-05-23 00:40:00.061735 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:40:00.061834 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:40:00.061848 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:40:00.061904 | orchestrator | 2025-05-23 00:40:00.061917 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:40:00.061952 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:00.061965 | orchestrator | 2025-05-23 00:40:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:40:00.061977 | orchestrator | 2025-05-23 00:40:00 | INFO  | Please wait and do not abort execution. 
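The wipe-partitions play prepares the spare disks for Ceph by removing any leftover LVM volumes, erasing filesystem signatures, zeroing the start of each device, and then letting udev re-read the devices. A condensed sketch of the destructive part, assuming the target disks are passed in as ceph_osd_devices (e.g. /dev/sdb, /dev/sdc, /dev/sdd); this is not the exact OSISM play:

    # Sketch: clear old signatures and partition metadata from OSD candidates.
    - name: Wipe partitions with wipefs
      ansible.builtin.command: "wipefs --force --all {{ item }}"
      loop: "{{ ceph_osd_devices }}"
      become: true

    - name: Overwrite first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32 oflag=direct"
      loop: "{{ ceph_osd_devices }}"
      become: true

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules
      become: true

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger
      become: true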
2025-05-23 00:40:00.061989 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:00.062265 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:00.062286 | orchestrator | 2025-05-23 00:40:00.062634 | orchestrator | Friday 23 May 2025 00:40:00 +0000 (0:00:00.629) 0:00:07.879 ************ 2025-05-23 00:40:00.063078 | orchestrator | =============================================================================== 2025-05-23 00:40:00.063396 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2025-05-23 00:40:00.063677 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-05-23 00:40:00.064042 | orchestrator | Check device availability ----------------------------------------------- 1.16s 2025-05-23 00:40:00.067291 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2025-05-23 00:40:00.067574 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-05-23 00:40:00.068029 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-05-23 00:40:00.068406 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-05-23 00:40:00.068700 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-05-23 00:40:00.071517 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-05-23 00:40:02.145290 | orchestrator | 2025-05-23 00:40:02 | INFO  | Task 240acf30-2c30-4aac-a0f6-cfe1d8ba71c8 (facts) was prepared for execution. 2025-05-23 00:40:02.145393 | orchestrator | 2025-05-23 00:40:02 | INFO  | It takes a moment until task 240acf30-2c30-4aac-a0f6-cfe1d8ba71c8 (facts) has been started and output is visible here. 
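The facts play first prepares /etc/ansible/facts.d, the default directory from which Ansible exposes custom facts as ansible_local, and then re-gathers facts on all hosts. A small sketch of that flow; the local_fact_files variable is a placeholder for whatever fact scripts a deployment ships, not an OSISM variable:

    # Sketch: provide custom facts, then refresh the fact cache.
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"
      become: true

    - name: Copy fact files
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /etc/ansible/facts.d/
        mode: "0755"
      loop: "{{ local_fact_files | default([]) }}"
      become: true

    - name: Gathers facts about hosts
      ansible.builtin.setup: {}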
2025-05-23 00:40:05.127751 | orchestrator | 2025-05-23 00:40:05.128265 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-23 00:40:05.129286 | orchestrator | 2025-05-23 00:40:05.129718 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-23 00:40:05.131564 | orchestrator | Friday 23 May 2025 00:40:05 +0000 (0:00:00.164) 0:00:00.164 ************ 2025-05-23 00:40:06.111805 | orchestrator | ok: [testbed-manager] 2025-05-23 00:40:06.111997 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:40:06.113944 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:40:06.115733 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:40:06.116955 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:06.118655 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:06.119000 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:40:06.119626 | orchestrator | 2025-05-23 00:40:06.120126 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-23 00:40:06.121186 | orchestrator | Friday 23 May 2025 00:40:06 +0000 (0:00:00.980) 0:00:01.145 ************ 2025-05-23 00:40:06.273190 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:40:06.346972 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:40:06.417911 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:40:06.490479 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:40:06.557988 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:07.173307 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:07.174179 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:07.174647 | orchestrator | 2025-05-23 00:40:07.175389 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-23 00:40:07.176018 | orchestrator | 2025-05-23 00:40:07.176577 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:40:07.177001 | orchestrator | Friday 23 May 2025 00:40:07 +0000 (0:00:01.069) 0:00:02.214 ************ 2025-05-23 00:40:11.799083 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:40:11.799917 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:40:11.801265 | orchestrator | ok: [testbed-manager] 2025-05-23 00:40:11.802309 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:40:11.803627 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:11.804602 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:11.805575 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:40:11.806664 | orchestrator | 2025-05-23 00:40:11.807332 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-23 00:40:11.808064 | orchestrator | 2025-05-23 00:40:11.808922 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-23 00:40:11.809535 | orchestrator | Friday 23 May 2025 00:40:11 +0000 (0:00:04.621) 0:00:06.836 ************ 2025-05-23 00:40:12.099141 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:40:12.169056 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:40:12.243032 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:40:12.312934 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:40:12.383672 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:12.420827 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:12.421419 | orchestrator | skipping: 
[testbed-node-5] 2025-05-23 00:40:12.422092 | orchestrator | 2025-05-23 00:40:12.422655 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:40:12.423632 | orchestrator | 2025-05-23 00:40:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:40:12.423740 | orchestrator | 2025-05-23 00:40:12 | INFO  | Please wait and do not abort execution. 2025-05-23 00:40:12.424470 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.425302 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.425610 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.426344 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.426503 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.427231 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.427706 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:40:12.428094 | orchestrator | 2025-05-23 00:40:12.428496 | orchestrator | Friday 23 May 2025 00:40:12 +0000 (0:00:00.624) 0:00:07.461 ************ 2025-05-23 00:40:12.430259 | orchestrator | =============================================================================== 2025-05-23 00:40:12.431947 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.62s 2025-05-23 00:40:12.432613 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2025-05-23 00:40:12.433191 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2025-05-23 00:40:12.434199 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2025-05-23 00:40:14.755794 | orchestrator | 2025-05-23 00:40:14 | INFO  | Task 97898df5-ba79-45e5-888f-a3fe291f4d19 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-23 00:40:14.755896 | orchestrator | 2025-05-23 00:40:14 | INFO  | It takes a moment until task 97898df5-ba79-45e5-888f-a3fe291f4d19 (ceph-configure-lvm-volumes) has been started and output is visible here. 
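The ceph-configure-lvm-volumes play derives one stable identifier per OSD disk and turns it into the lvm_volumes list that ceph-ansible consumes (here in the simple "block only" layout, since no separate DB or WAL devices are configured). A heavily simplified sketch follows; the UUID derivation and the VG/LV naming scheme are assumptions for illustration, not the play's actual logic:

    # Sketch: deterministic per-device UUIDs and a block-only lvm_volumes list.
    - name: Set UUIDs for OSD VGs/LVs
      ansible.builtin.set_fact:
        ceph_osd_devices: >-
          {{ ceph_osd_devices | default({}) | combine({
               item: {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item) | to_uuid}
             }) }}
      loop:
        - sdb
        - sdc

    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        lvm_volumes: >-
          {{ lvm_volumes | default([]) + [{
               'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
               'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid
             }] }}
      loop: "{{ ceph_osd_devices | dict2items }}"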
2025-05-23 00:40:17.898740 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 00:40:18.431785 | orchestrator | 2025-05-23 00:40:18.433174 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-23 00:40:18.435076 | orchestrator | 2025-05-23 00:40:18.435107 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:40:18.437700 | orchestrator | Friday 23 May 2025 00:40:18 +0000 (0:00:00.456) 0:00:00.456 ************ 2025-05-23 00:40:18.725952 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-23 00:40:18.727284 | orchestrator | 2025-05-23 00:40:18.727352 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:40:18.727420 | orchestrator | Friday 23 May 2025 00:40:18 +0000 (0:00:00.293) 0:00:00.750 ************ 2025-05-23 00:40:18.987843 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:18.990344 | orchestrator | 2025-05-23 00:40:18.991767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:18.993969 | orchestrator | Friday 23 May 2025 00:40:18 +0000 (0:00:00.262) 0:00:01.012 ************ 2025-05-23 00:40:19.813186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-23 00:40:19.829840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-23 00:40:19.833710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-23 00:40:19.842701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-23 00:40:19.842874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-23 00:40:19.847327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-23 00:40:19.847514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-23 00:40:19.850946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-23 00:40:19.851105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-23 00:40:19.863020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-23 00:40:19.863066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-23 00:40:19.866242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-23 00:40:19.871093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-23 00:40:19.872414 | orchestrator | 2025-05-23 00:40:19.881470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:19.886141 | orchestrator | Friday 23 May 2025 00:40:19 +0000 (0:00:00.822) 0:00:01.835 ************ 2025-05-23 00:40:20.059174 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:20.061853 | orchestrator | 2025-05-23 00:40:20.061879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:20.061892 | orchestrator | Friday 23 May 2025 00:40:20 +0000 
(0:00:00.242) 0:00:02.077 ************ 2025-05-23 00:40:20.265427 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:20.267143 | orchestrator | 2025-05-23 00:40:20.268404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:20.268745 | orchestrator | Friday 23 May 2025 00:40:20 +0000 (0:00:00.207) 0:00:02.285 ************ 2025-05-23 00:40:20.537621 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:20.537718 | orchestrator | 2025-05-23 00:40:20.537866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:20.538287 | orchestrator | Friday 23 May 2025 00:40:20 +0000 (0:00:00.275) 0:00:02.560 ************ 2025-05-23 00:40:20.760978 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:20.763453 | orchestrator | 2025-05-23 00:40:20.763689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:20.764226 | orchestrator | Friday 23 May 2025 00:40:20 +0000 (0:00:00.221) 0:00:02.782 ************ 2025-05-23 00:40:20.982418 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:20.985483 | orchestrator | 2025-05-23 00:40:20.988728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:20.989100 | orchestrator | Friday 23 May 2025 00:40:20 +0000 (0:00:00.224) 0:00:03.006 ************ 2025-05-23 00:40:21.205937 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:21.209836 | orchestrator | 2025-05-23 00:40:21.223341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:21.224202 | orchestrator | Friday 23 May 2025 00:40:21 +0000 (0:00:00.226) 0:00:03.232 ************ 2025-05-23 00:40:21.492880 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:21.494126 | orchestrator | 2025-05-23 00:40:21.496462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:21.496965 | orchestrator | Friday 23 May 2025 00:40:21 +0000 (0:00:00.284) 0:00:03.517 ************ 2025-05-23 00:40:21.752130 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:21.752353 | orchestrator | 2025-05-23 00:40:21.752811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:21.754248 | orchestrator | Friday 23 May 2025 00:40:21 +0000 (0:00:00.259) 0:00:03.777 ************ 2025-05-23 00:40:22.356807 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118) 2025-05-23 00:40:22.357024 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118) 2025-05-23 00:40:22.358420 | orchestrator | 2025-05-23 00:40:22.359050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:22.359978 | orchestrator | Friday 23 May 2025 00:40:22 +0000 (0:00:00.604) 0:00:04.382 ************ 2025-05-23 00:40:23.168071 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5) 2025-05-23 00:40:23.168177 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5) 2025-05-23 00:40:23.168192 | orchestrator | 2025-05-23 00:40:23.169120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 
00:40:23.169789 | orchestrator | Friday 23 May 2025 00:40:23 +0000 (0:00:00.809) 0:00:05.191 ************ 2025-05-23 00:40:23.610357 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75) 2025-05-23 00:40:23.612419 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75) 2025-05-23 00:40:23.613677 | orchestrator | 2025-05-23 00:40:23.615295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:23.617125 | orchestrator | Friday 23 May 2025 00:40:23 +0000 (0:00:00.444) 0:00:05.635 ************ 2025-05-23 00:40:24.061131 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c) 2025-05-23 00:40:24.061228 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c) 2025-05-23 00:40:24.061297 | orchestrator | 2025-05-23 00:40:24.061434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:24.061847 | orchestrator | Friday 23 May 2025 00:40:24 +0000 (0:00:00.449) 0:00:06.085 ************ 2025-05-23 00:40:24.373137 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:40:24.373288 | orchestrator | 2025-05-23 00:40:24.374467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:24.374671 | orchestrator | Friday 23 May 2025 00:40:24 +0000 (0:00:00.316) 0:00:06.401 ************ 2025-05-23 00:40:24.763504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-23 00:40:24.763610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-23 00:40:24.763890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-23 00:40:24.764542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-23 00:40:24.765588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-23 00:40:24.766476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-23 00:40:24.769363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-23 00:40:24.769751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-23 00:40:24.772083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-23 00:40:24.773112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-23 00:40:24.773331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-23 00:40:24.773662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-23 00:40:24.773952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-23 00:40:24.774234 | orchestrator | 2025-05-23 00:40:24.774613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:24.774811 | orchestrator | Friday 23 May 2025 00:40:24 +0000 
(0:00:00.388) 0:00:06.789 ************ 2025-05-23 00:40:24.929010 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:24.932694 | orchestrator | 2025-05-23 00:40:24.933131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:24.933269 | orchestrator | Friday 23 May 2025 00:40:24 +0000 (0:00:00.166) 0:00:06.955 ************ 2025-05-23 00:40:25.136420 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:25.136503 | orchestrator | 2025-05-23 00:40:25.140469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:25.141104 | orchestrator | Friday 23 May 2025 00:40:25 +0000 (0:00:00.206) 0:00:07.162 ************ 2025-05-23 00:40:25.332269 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:25.332867 | orchestrator | 2025-05-23 00:40:25.334589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:25.336320 | orchestrator | Friday 23 May 2025 00:40:25 +0000 (0:00:00.196) 0:00:07.359 ************ 2025-05-23 00:40:25.515486 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:25.518096 | orchestrator | 2025-05-23 00:40:25.519476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:25.520404 | orchestrator | Friday 23 May 2025 00:40:25 +0000 (0:00:00.181) 0:00:07.541 ************ 2025-05-23 00:40:25.823511 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:25.826634 | orchestrator | 2025-05-23 00:40:25.826953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:25.828522 | orchestrator | Friday 23 May 2025 00:40:25 +0000 (0:00:00.308) 0:00:07.849 ************ 2025-05-23 00:40:26.012658 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:26.012840 | orchestrator | 2025-05-23 00:40:26.013051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:26.013904 | orchestrator | Friday 23 May 2025 00:40:26 +0000 (0:00:00.190) 0:00:08.039 ************ 2025-05-23 00:40:26.193145 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:26.193219 | orchestrator | 2025-05-23 00:40:26.194108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:26.194186 | orchestrator | Friday 23 May 2025 00:40:26 +0000 (0:00:00.179) 0:00:08.219 ************ 2025-05-23 00:40:26.376974 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:26.381535 | orchestrator | 2025-05-23 00:40:26.381550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:26.381574 | orchestrator | Friday 23 May 2025 00:40:26 +0000 (0:00:00.184) 0:00:08.404 ************ 2025-05-23 00:40:26.984909 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-23 00:40:26.984990 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-23 00:40:26.988532 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-23 00:40:26.990101 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-23 00:40:26.990203 | orchestrator | 2025-05-23 00:40:26.990444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:26.990816 | orchestrator | Friday 23 May 2025 00:40:26 +0000 (0:00:00.604) 0:00:09.009 ************ 2025-05-23 00:40:27.167390 | orchestrator | 
skipping: [testbed-node-3] 2025-05-23 00:40:27.172150 | orchestrator | 2025-05-23 00:40:27.172195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:27.172209 | orchestrator | Friday 23 May 2025 00:40:27 +0000 (0:00:00.186) 0:00:09.195 ************ 2025-05-23 00:40:27.342706 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:27.342782 | orchestrator | 2025-05-23 00:40:27.347179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:27.348145 | orchestrator | Friday 23 May 2025 00:40:27 +0000 (0:00:00.174) 0:00:09.370 ************ 2025-05-23 00:40:27.529776 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:27.530105 | orchestrator | 2025-05-23 00:40:27.533232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:27.533548 | orchestrator | Friday 23 May 2025 00:40:27 +0000 (0:00:00.184) 0:00:09.554 ************ 2025-05-23 00:40:27.722156 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:27.722235 | orchestrator | 2025-05-23 00:40:27.722731 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-23 00:40:27.725021 | orchestrator | Friday 23 May 2025 00:40:27 +0000 (0:00:00.194) 0:00:09.749 ************ 2025-05-23 00:40:27.898960 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-23 00:40:27.899044 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-23 00:40:27.899212 | orchestrator | 2025-05-23 00:40:27.899757 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-23 00:40:27.900451 | orchestrator | Friday 23 May 2025 00:40:27 +0000 (0:00:00.172) 0:00:09.922 ************ 2025-05-23 00:40:28.024416 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:28.026509 | orchestrator | 2025-05-23 00:40:28.028196 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-23 00:40:28.029870 | orchestrator | Friday 23 May 2025 00:40:28 +0000 (0:00:00.126) 0:00:10.048 ************ 2025-05-23 00:40:28.287003 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:28.287192 | orchestrator | 2025-05-23 00:40:28.287702 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-23 00:40:28.288222 | orchestrator | Friday 23 May 2025 00:40:28 +0000 (0:00:00.262) 0:00:10.311 ************ 2025-05-23 00:40:28.429917 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:28.430078 | orchestrator | 2025-05-23 00:40:28.430634 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-23 00:40:28.431276 | orchestrator | Friday 23 May 2025 00:40:28 +0000 (0:00:00.143) 0:00:10.455 ************ 2025-05-23 00:40:28.550967 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:28.551522 | orchestrator | 2025-05-23 00:40:28.553509 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-23 00:40:28.553961 | orchestrator | Friday 23 May 2025 00:40:28 +0000 (0:00:00.122) 0:00:10.577 ************ 2025-05-23 00:40:28.831651 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17b95678-9240-5166-938b-e89fe6559568'}}) 2025-05-23 00:40:28.831765 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}}) 2025-05-23 00:40:28.831847 | orchestrator | 2025-05-23 00:40:28.832145 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-23 00:40:28.832310 | orchestrator | Friday 23 May 2025 00:40:28 +0000 (0:00:00.277) 0:00:10.855 ************ 2025-05-23 00:40:29.021501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17b95678-9240-5166-938b-e89fe6559568'}})  2025-05-23 00:40:29.021671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}})  2025-05-23 00:40:29.022121 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:29.022182 | orchestrator | 2025-05-23 00:40:29.022308 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-23 00:40:29.022589 | orchestrator | Friday 23 May 2025 00:40:29 +0000 (0:00:00.190) 0:00:11.045 ************ 2025-05-23 00:40:29.236528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17b95678-9240-5166-938b-e89fe6559568'}})  2025-05-23 00:40:29.238503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}})  2025-05-23 00:40:29.238670 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:29.239053 | orchestrator | 2025-05-23 00:40:29.239410 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-23 00:40:29.242368 | orchestrator | Friday 23 May 2025 00:40:29 +0000 (0:00:00.212) 0:00:11.257 ************ 2025-05-23 00:40:29.421862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17b95678-9240-5166-938b-e89fe6559568'}})  2025-05-23 00:40:29.421963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}})  2025-05-23 00:40:29.421977 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:29.421990 | orchestrator | 2025-05-23 00:40:29.422002 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-23 00:40:29.422074 | orchestrator | Friday 23 May 2025 00:40:29 +0000 (0:00:00.188) 0:00:11.446 ************ 2025-05-23 00:40:29.652448 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:29.653358 | orchestrator | 2025-05-23 00:40:29.653920 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-23 00:40:29.654103 | orchestrator | Friday 23 May 2025 00:40:29 +0000 (0:00:00.231) 0:00:11.677 ************ 2025-05-23 00:40:29.882843 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:40:29.882958 | orchestrator | 2025-05-23 00:40:29.883036 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-23 00:40:29.883051 | orchestrator | Friday 23 May 2025 00:40:29 +0000 (0:00:00.231) 0:00:11.909 ************ 2025-05-23 00:40:30.028762 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:30.028907 | orchestrator | 2025-05-23 00:40:30.030195 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-23 00:40:30.032310 | orchestrator | Friday 23 May 2025 00:40:30 +0000 (0:00:00.146) 0:00:12.055 ************ 2025-05-23 00:40:30.196060 | orchestrator | skipping: [testbed-node-3] 2025-05-23 
00:40:30.196205 | orchestrator | 2025-05-23 00:40:30.196290 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-23 00:40:30.196307 | orchestrator | Friday 23 May 2025 00:40:30 +0000 (0:00:00.165) 0:00:12.221 ************ 2025-05-23 00:40:30.381246 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:30.381372 | orchestrator | 2025-05-23 00:40:30.381399 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-23 00:40:30.382640 | orchestrator | Friday 23 May 2025 00:40:30 +0000 (0:00:00.186) 0:00:12.407 ************ 2025-05-23 00:40:30.740308 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:40:30.740475 | orchestrator |  "ceph_osd_devices": { 2025-05-23 00:40:30.741452 | orchestrator |  "sdb": { 2025-05-23 00:40:30.741480 | orchestrator |  "osd_lvm_uuid": "17b95678-9240-5166-938b-e89fe6559568" 2025-05-23 00:40:30.742357 | orchestrator |  }, 2025-05-23 00:40:30.742857 | orchestrator |  "sdc": { 2025-05-23 00:40:30.744807 | orchestrator |  "osd_lvm_uuid": "8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0" 2025-05-23 00:40:30.744840 | orchestrator |  } 2025-05-23 00:40:30.745813 | orchestrator |  } 2025-05-23 00:40:30.745976 | orchestrator | } 2025-05-23 00:40:30.745997 | orchestrator | 2025-05-23 00:40:30.749907 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-23 00:40:30.749939 | orchestrator | Friday 23 May 2025 00:40:30 +0000 (0:00:00.358) 0:00:12.766 ************ 2025-05-23 00:40:30.879843 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:30.879941 | orchestrator | 2025-05-23 00:40:30.879978 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-23 00:40:30.882187 | orchestrator | Friday 23 May 2025 00:40:30 +0000 (0:00:00.137) 0:00:12.903 ************ 2025-05-23 00:40:31.010905 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:31.011321 | orchestrator | 2025-05-23 00:40:31.012004 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-23 00:40:31.012482 | orchestrator | Friday 23 May 2025 00:40:31 +0000 (0:00:00.134) 0:00:13.037 ************ 2025-05-23 00:40:31.153259 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:40:31.153347 | orchestrator | 2025-05-23 00:40:31.153428 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-23 00:40:31.153957 | orchestrator | Friday 23 May 2025 00:40:31 +0000 (0:00:00.142) 0:00:13.180 ************ 2025-05-23 00:40:31.435192 | orchestrator | changed: [testbed-node-3] => { 2025-05-23 00:40:31.435689 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-23 00:40:31.436679 | orchestrator |  "ceph_osd_devices": { 2025-05-23 00:40:31.437959 | orchestrator |  "sdb": { 2025-05-23 00:40:31.438683 | orchestrator |  "osd_lvm_uuid": "17b95678-9240-5166-938b-e89fe6559568" 2025-05-23 00:40:31.439557 | orchestrator |  }, 2025-05-23 00:40:31.442197 | orchestrator |  "sdc": { 2025-05-23 00:40:31.442630 | orchestrator |  "osd_lvm_uuid": "8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0" 2025-05-23 00:40:31.443434 | orchestrator |  } 2025-05-23 00:40:31.446645 | orchestrator |  }, 2025-05-23 00:40:31.449055 | orchestrator |  "lvm_volumes": [ 2025-05-23 00:40:31.449189 | orchestrator |  { 2025-05-23 00:40:31.449453 | orchestrator |  "data": "osd-block-17b95678-9240-5166-938b-e89fe6559568", 2025-05-23 00:40:31.449948 | orchestrator |  
"data_vg": "ceph-17b95678-9240-5166-938b-e89fe6559568" 2025-05-23 00:40:31.450344 | orchestrator |  }, 2025-05-23 00:40:31.450619 | orchestrator |  { 2025-05-23 00:40:31.451483 | orchestrator |  "data": "osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0", 2025-05-23 00:40:31.451636 | orchestrator |  "data_vg": "ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0" 2025-05-23 00:40:31.452437 | orchestrator |  } 2025-05-23 00:40:31.452633 | orchestrator |  ] 2025-05-23 00:40:31.453762 | orchestrator |  } 2025-05-23 00:40:31.454272 | orchestrator | } 2025-05-23 00:40:31.454441 | orchestrator | 2025-05-23 00:40:31.455346 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-23 00:40:31.455670 | orchestrator | Friday 23 May 2025 00:40:31 +0000 (0:00:00.281) 0:00:13.462 ************ 2025-05-23 00:40:33.604356 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-23 00:40:33.604469 | orchestrator | 2025-05-23 00:40:33.604497 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-23 00:40:33.604514 | orchestrator | 2025-05-23 00:40:33.604526 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:40:33.605391 | orchestrator | Friday 23 May 2025 00:40:33 +0000 (0:00:02.166) 0:00:15.629 ************ 2025-05-23 00:40:33.866122 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-23 00:40:33.866229 | orchestrator | 2025-05-23 00:40:33.868233 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:40:33.868987 | orchestrator | Friday 23 May 2025 00:40:33 +0000 (0:00:00.262) 0:00:15.891 ************ 2025-05-23 00:40:34.090716 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:34.091073 | orchestrator | 2025-05-23 00:40:34.091626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:34.094339 | orchestrator | Friday 23 May 2025 00:40:34 +0000 (0:00:00.226) 0:00:16.118 ************ 2025-05-23 00:40:34.466442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-23 00:40:34.467286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-23 00:40:34.467476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-23 00:40:34.467984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-23 00:40:34.468658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-23 00:40:34.471220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-23 00:40:34.471414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-23 00:40:34.471683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-23 00:40:34.471765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-23 00:40:34.472138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-23 00:40:34.472267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-23 00:40:34.472769 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-23 00:40:34.473175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-23 00:40:34.473196 | orchestrator | 2025-05-23 00:40:34.473473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:34.473742 | orchestrator | Friday 23 May 2025 00:40:34 +0000 (0:00:00.375) 0:00:16.493 ************ 2025-05-23 00:40:34.641783 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:34.643544 | orchestrator | 2025-05-23 00:40:34.643614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:34.643889 | orchestrator | Friday 23 May 2025 00:40:34 +0000 (0:00:00.174) 0:00:16.668 ************ 2025-05-23 00:40:34.813277 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:34.813361 | orchestrator | 2025-05-23 00:40:34.814166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:34.814536 | orchestrator | Friday 23 May 2025 00:40:34 +0000 (0:00:00.171) 0:00:16.840 ************ 2025-05-23 00:40:34.983913 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:34.985094 | orchestrator | 2025-05-23 00:40:34.985350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:34.985443 | orchestrator | Friday 23 May 2025 00:40:34 +0000 (0:00:00.171) 0:00:17.011 ************ 2025-05-23 00:40:35.151885 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:35.151998 | orchestrator | 2025-05-23 00:40:35.152390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:35.155131 | orchestrator | Friday 23 May 2025 00:40:35 +0000 (0:00:00.167) 0:00:17.179 ************ 2025-05-23 00:40:35.550306 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:35.550446 | orchestrator | 2025-05-23 00:40:35.550671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:35.550778 | orchestrator | Friday 23 May 2025 00:40:35 +0000 (0:00:00.397) 0:00:17.577 ************ 2025-05-23 00:40:35.731977 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:35.732194 | orchestrator | 2025-05-23 00:40:35.732720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:35.732930 | orchestrator | Friday 23 May 2025 00:40:35 +0000 (0:00:00.182) 0:00:17.760 ************ 2025-05-23 00:40:35.924797 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:35.924985 | orchestrator | 2025-05-23 00:40:35.925395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:35.930713 | orchestrator | Friday 23 May 2025 00:40:35 +0000 (0:00:00.192) 0:00:17.952 ************ 2025-05-23 00:40:36.120347 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:36.123543 | orchestrator | 2025-05-23 00:40:36.123685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:36.123756 | orchestrator | Friday 23 May 2025 00:40:36 +0000 (0:00:00.193) 0:00:18.146 ************ 2025-05-23 00:40:36.552747 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4) 2025-05-23 00:40:36.553018 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4) 2025-05-23 00:40:36.553341 | orchestrator | 2025-05-23 00:40:36.554110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:36.558155 | orchestrator | Friday 23 May 2025 00:40:36 +0000 (0:00:00.432) 0:00:18.578 ************ 2025-05-23 00:40:36.931633 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7) 2025-05-23 00:40:36.931743 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7) 2025-05-23 00:40:36.932092 | orchestrator | 2025-05-23 00:40:36.932376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:36.934383 | orchestrator | Friday 23 May 2025 00:40:36 +0000 (0:00:00.377) 0:00:18.956 ************ 2025-05-23 00:40:37.347813 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652) 2025-05-23 00:40:37.348484 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652) 2025-05-23 00:40:37.349339 | orchestrator | 2025-05-23 00:40:37.350121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:37.350333 | orchestrator | Friday 23 May 2025 00:40:37 +0000 (0:00:00.417) 0:00:19.374 ************ 2025-05-23 00:40:37.830010 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d) 2025-05-23 00:40:37.830236 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d) 2025-05-23 00:40:37.830824 | orchestrator | 2025-05-23 00:40:37.833931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:37.833951 | orchestrator | Friday 23 May 2025 00:40:37 +0000 (0:00:00.481) 0:00:19.856 ************ 2025-05-23 00:40:38.149200 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:40:38.149319 | orchestrator | 2025-05-23 00:40:38.149393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:38.149498 | orchestrator | Friday 23 May 2025 00:40:38 +0000 (0:00:00.321) 0:00:20.177 ************ 2025-05-23 00:40:38.733342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-23 00:40:38.734007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-23 00:40:38.734747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-23 00:40:38.737552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-23 00:40:38.738339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-23 00:40:38.739218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-23 00:40:38.740069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-23 00:40:38.740469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-23 00:40:38.741197 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-23 00:40:38.741662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-23 00:40:38.742184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-23 00:40:38.742670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-23 00:40:38.743090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-23 00:40:38.743552 | orchestrator | 2025-05-23 00:40:38.744161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:38.744406 | orchestrator | Friday 23 May 2025 00:40:38 +0000 (0:00:00.582) 0:00:20.759 ************ 2025-05-23 00:40:38.939385 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:38.939797 | orchestrator | 2025-05-23 00:40:38.940474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:38.941457 | orchestrator | Friday 23 May 2025 00:40:38 +0000 (0:00:00.207) 0:00:20.966 ************ 2025-05-23 00:40:39.135775 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:39.136600 | orchestrator | 2025-05-23 00:40:39.137081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:39.139945 | orchestrator | Friday 23 May 2025 00:40:39 +0000 (0:00:00.195) 0:00:21.162 ************ 2025-05-23 00:40:39.317139 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:39.317220 | orchestrator | 2025-05-23 00:40:39.318132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:39.318521 | orchestrator | Friday 23 May 2025 00:40:39 +0000 (0:00:00.181) 0:00:21.343 ************ 2025-05-23 00:40:39.502447 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:39.506863 | orchestrator | 2025-05-23 00:40:39.506918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:39.506941 | orchestrator | Friday 23 May 2025 00:40:39 +0000 (0:00:00.187) 0:00:21.530 ************ 2025-05-23 00:40:39.684881 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:39.684956 | orchestrator | 2025-05-23 00:40:39.684969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:39.684982 | orchestrator | Friday 23 May 2025 00:40:39 +0000 (0:00:00.179) 0:00:21.710 ************ 2025-05-23 00:40:39.897864 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:39.898058 | orchestrator | 2025-05-23 00:40:39.898322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:39.898703 | orchestrator | Friday 23 May 2025 00:40:39 +0000 (0:00:00.215) 0:00:21.925 ************ 2025-05-23 00:40:40.067504 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:40.068116 | orchestrator | 2025-05-23 00:40:40.069397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:40.072958 | orchestrator | Friday 23 May 2025 00:40:40 +0000 (0:00:00.169) 0:00:22.094 ************ 2025-05-23 00:40:40.247446 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:40.248287 | orchestrator | 2025-05-23 00:40:40.249255 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-23 00:40:40.252749 | orchestrator | Friday 23 May 2025 00:40:40 +0000 (0:00:00.179) 0:00:22.274 ************ 2025-05-23 00:40:40.900657 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-23 00:40:40.903816 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-23 00:40:40.904444 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-23 00:40:40.905333 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-23 00:40:40.906180 | orchestrator | 2025-05-23 00:40:40.906629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:40.907445 | orchestrator | Friday 23 May 2025 00:40:40 +0000 (0:00:00.650) 0:00:22.925 ************ 2025-05-23 00:40:41.413744 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:41.415161 | orchestrator | 2025-05-23 00:40:41.419626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:41.421144 | orchestrator | Friday 23 May 2025 00:40:41 +0000 (0:00:00.514) 0:00:23.440 ************ 2025-05-23 00:40:41.626051 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:41.627444 | orchestrator | 2025-05-23 00:40:41.628075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:41.628471 | orchestrator | Friday 23 May 2025 00:40:41 +0000 (0:00:00.214) 0:00:23.654 ************ 2025-05-23 00:40:41.808386 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:41.809913 | orchestrator | 2025-05-23 00:40:41.810203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:41.812855 | orchestrator | Friday 23 May 2025 00:40:41 +0000 (0:00:00.180) 0:00:23.835 ************ 2025-05-23 00:40:41.987639 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:41.987729 | orchestrator | 2025-05-23 00:40:41.988687 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-23 00:40:41.988975 | orchestrator | Friday 23 May 2025 00:40:41 +0000 (0:00:00.179) 0:00:24.014 ************ 2025-05-23 00:40:42.139270 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-23 00:40:42.140284 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-23 00:40:42.140999 | orchestrator | 2025-05-23 00:40:42.141922 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-23 00:40:42.142694 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.151) 0:00:24.165 ************ 2025-05-23 00:40:42.267674 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:42.268711 | orchestrator | 2025-05-23 00:40:42.269602 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-23 00:40:42.271521 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.129) 0:00:24.295 ************ 2025-05-23 00:40:42.397906 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:42.398060 | orchestrator | 2025-05-23 00:40:42.398267 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-23 00:40:42.398299 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.129) 0:00:24.424 ************ 2025-05-23 00:40:42.505555 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:42.505675 | orchestrator | 2025-05-23 
00:40:42.506321 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-23 00:40:42.506626 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.109) 0:00:24.533 ************ 2025-05-23 00:40:42.621995 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:42.622296 | orchestrator | 2025-05-23 00:40:42.622992 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-23 00:40:42.623287 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.116) 0:00:24.649 ************ 2025-05-23 00:40:42.786663 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '125adf16-eac9-5ada-96e7-bcd4f30a545d'}}) 2025-05-23 00:40:42.787510 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}}) 2025-05-23 00:40:42.788436 | orchestrator | 2025-05-23 00:40:42.792353 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-23 00:40:42.792391 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.163) 0:00:24.813 ************ 2025-05-23 00:40:42.947176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '125adf16-eac9-5ada-96e7-bcd4f30a545d'}})  2025-05-23 00:40:42.947381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}})  2025-05-23 00:40:42.947727 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:42.948415 | orchestrator | 2025-05-23 00:40:42.949292 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-23 00:40:42.951480 | orchestrator | Friday 23 May 2025 00:40:42 +0000 (0:00:00.160) 0:00:24.973 ************ 2025-05-23 00:40:43.112677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '125adf16-eac9-5ada-96e7-bcd4f30a545d'}})  2025-05-23 00:40:43.112910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}})  2025-05-23 00:40:43.113944 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:43.114621 | orchestrator | 2025-05-23 00:40:43.115386 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-23 00:40:43.116006 | orchestrator | Friday 23 May 2025 00:40:43 +0000 (0:00:00.164) 0:00:25.138 ************ 2025-05-23 00:40:43.542564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '125adf16-eac9-5ada-96e7-bcd4f30a545d'}})  2025-05-23 00:40:43.544144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}})  2025-05-23 00:40:43.544353 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:43.545271 | orchestrator | 2025-05-23 00:40:43.548500 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-23 00:40:43.549622 | orchestrator | Friday 23 May 2025 00:40:43 +0000 (0:00:00.429) 0:00:25.567 ************ 2025-05-23 00:40:43.699076 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:43.699310 | orchestrator | 2025-05-23 00:40:43.700719 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-23 00:40:43.701832 | orchestrator | Friday 23 May 2025 00:40:43 +0000 
(0:00:00.155) 0:00:25.723 ************ 2025-05-23 00:40:43.856474 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:40:43.857533 | orchestrator | 2025-05-23 00:40:43.858486 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-23 00:40:43.859604 | orchestrator | Friday 23 May 2025 00:40:43 +0000 (0:00:00.158) 0:00:25.882 ************ 2025-05-23 00:40:44.003323 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.003972 | orchestrator | 2025-05-23 00:40:44.005106 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-23 00:40:44.006260 | orchestrator | Friday 23 May 2025 00:40:43 +0000 (0:00:00.145) 0:00:26.027 ************ 2025-05-23 00:40:44.155184 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.157114 | orchestrator | 2025-05-23 00:40:44.157467 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-23 00:40:44.158434 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.152) 0:00:26.180 ************ 2025-05-23 00:40:44.295361 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.296683 | orchestrator | 2025-05-23 00:40:44.297543 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-23 00:40:44.298660 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.141) 0:00:26.321 ************ 2025-05-23 00:40:44.440298 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:40:44.441628 | orchestrator |  "ceph_osd_devices": { 2025-05-23 00:40:44.443262 | orchestrator |  "sdb": { 2025-05-23 00:40:44.444435 | orchestrator |  "osd_lvm_uuid": "125adf16-eac9-5ada-96e7-bcd4f30a545d" 2025-05-23 00:40:44.445367 | orchestrator |  }, 2025-05-23 00:40:44.445875 | orchestrator |  "sdc": { 2025-05-23 00:40:44.446436 | orchestrator |  "osd_lvm_uuid": "8bf3a31b-2d76-5988-bbd2-6800630d4c9a" 2025-05-23 00:40:44.447058 | orchestrator |  } 2025-05-23 00:40:44.447706 | orchestrator |  } 2025-05-23 00:40:44.448242 | orchestrator | } 2025-05-23 00:40:44.448966 | orchestrator | 2025-05-23 00:40:44.449099 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-23 00:40:44.449373 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.145) 0:00:26.467 ************ 2025-05-23 00:40:44.590998 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.592555 | orchestrator | 2025-05-23 00:40:44.593381 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-23 00:40:44.595298 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.148) 0:00:26.616 ************ 2025-05-23 00:40:44.733265 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.733457 | orchestrator | 2025-05-23 00:40:44.734149 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-23 00:40:44.735303 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.143) 0:00:26.759 ************ 2025-05-23 00:40:44.865485 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:40:44.866455 | orchestrator | 2025-05-23 00:40:44.866830 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-23 00:40:44.870898 | orchestrator | Friday 23 May 2025 00:40:44 +0000 (0:00:00.131) 0:00:26.891 ************ 2025-05-23 00:40:45.309956 | orchestrator | changed: [testbed-node-4] => { 2025-05-23 00:40:45.310186 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-23 00:40:45.311075 | orchestrator |  "ceph_osd_devices": { 2025-05-23 00:40:45.312167 | orchestrator |  "sdb": { 2025-05-23 00:40:45.313385 | orchestrator |  "osd_lvm_uuid": "125adf16-eac9-5ada-96e7-bcd4f30a545d" 2025-05-23 00:40:45.313409 | orchestrator |  }, 2025-05-23 00:40:45.313720 | orchestrator |  "sdc": { 2025-05-23 00:40:45.314792 | orchestrator |  "osd_lvm_uuid": "8bf3a31b-2d76-5988-bbd2-6800630d4c9a" 2025-05-23 00:40:45.315226 | orchestrator |  } 2025-05-23 00:40:45.316275 | orchestrator |  }, 2025-05-23 00:40:45.316951 | orchestrator |  "lvm_volumes": [ 2025-05-23 00:40:45.317362 | orchestrator |  { 2025-05-23 00:40:45.318192 | orchestrator |  "data": "osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d", 2025-05-23 00:40:45.318663 | orchestrator |  "data_vg": "ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d" 2025-05-23 00:40:45.319228 | orchestrator |  }, 2025-05-23 00:40:45.319696 | orchestrator |  { 2025-05-23 00:40:45.320281 | orchestrator |  "data": "osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a", 2025-05-23 00:40:45.321374 | orchestrator |  "data_vg": "ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a" 2025-05-23 00:40:45.321396 | orchestrator |  } 2025-05-23 00:40:45.321463 | orchestrator |  ] 2025-05-23 00:40:45.321782 | orchestrator |  } 2025-05-23 00:40:45.322227 | orchestrator | } 2025-05-23 00:40:45.322684 | orchestrator | 2025-05-23 00:40:45.323464 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-23 00:40:45.323794 | orchestrator | Friday 23 May 2025 00:40:45 +0000 (0:00:00.442) 0:00:27.334 ************ 2025-05-23 00:40:46.644950 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-23 00:40:46.645058 | orchestrator | 2025-05-23 00:40:46.646629 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-23 00:40:46.648644 | orchestrator | 2025-05-23 00:40:46.648993 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:40:46.651139 | orchestrator | Friday 23 May 2025 00:40:46 +0000 (0:00:01.335) 0:00:28.669 ************ 2025-05-23 00:40:46.889072 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-23 00:40:46.889709 | orchestrator | 2025-05-23 00:40:46.890627 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:40:46.891739 | orchestrator | Friday 23 May 2025 00:40:46 +0000 (0:00:00.246) 0:00:28.915 ************ 2025-05-23 00:40:47.113811 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:40:47.114180 | orchestrator | 2025-05-23 00:40:47.115069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:47.117763 | orchestrator | Friday 23 May 2025 00:40:47 +0000 (0:00:00.224) 0:00:29.140 ************ 2025-05-23 00:40:47.696344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-23 00:40:47.696668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-23 00:40:47.697237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-23 00:40:47.699193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-23 00:40:47.703661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-05-23 00:40:47.703756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-23 00:40:47.703772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-23 00:40:47.704485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-23 00:40:47.704510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-23 00:40:47.705209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-23 00:40:47.705490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-23 00:40:47.705852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-23 00:40:47.706440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-23 00:40:47.707549 | orchestrator | 2025-05-23 00:40:47.707863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:47.708337 | orchestrator | Friday 23 May 2025 00:40:47 +0000 (0:00:00.582) 0:00:29.722 ************ 2025-05-23 00:40:47.936511 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:47.936889 | orchestrator | 2025-05-23 00:40:47.937489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:47.938515 | orchestrator | Friday 23 May 2025 00:40:47 +0000 (0:00:00.240) 0:00:29.963 ************ 2025-05-23 00:40:48.159661 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:48.159760 | orchestrator | 2025-05-23 00:40:48.160507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:48.160777 | orchestrator | Friday 23 May 2025 00:40:48 +0000 (0:00:00.221) 0:00:30.184 ************ 2025-05-23 00:40:48.362179 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:48.362719 | orchestrator | 2025-05-23 00:40:48.363390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:48.364433 | orchestrator | Friday 23 May 2025 00:40:48 +0000 (0:00:00.203) 0:00:30.388 ************ 2025-05-23 00:40:48.577469 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:48.578330 | orchestrator | 2025-05-23 00:40:48.579418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:48.582654 | orchestrator | Friday 23 May 2025 00:40:48 +0000 (0:00:00.215) 0:00:30.603 ************ 2025-05-23 00:40:48.784636 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:48.786005 | orchestrator | 2025-05-23 00:40:48.786533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:48.788291 | orchestrator | Friday 23 May 2025 00:40:48 +0000 (0:00:00.205) 0:00:30.809 ************ 2025-05-23 00:40:48.975643 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:48.976559 | orchestrator | 2025-05-23 00:40:48.977403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:48.978138 | orchestrator | Friday 23 May 2025 00:40:48 +0000 (0:00:00.192) 0:00:31.002 ************ 2025-05-23 00:40:49.178448 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:49.179140 
| orchestrator | 2025-05-23 00:40:49.180292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:49.180317 | orchestrator | Friday 23 May 2025 00:40:49 +0000 (0:00:00.201) 0:00:31.203 ************ 2025-05-23 00:40:49.391756 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:49.391852 | orchestrator | 2025-05-23 00:40:49.391991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:49.392011 | orchestrator | Friday 23 May 2025 00:40:49 +0000 (0:00:00.214) 0:00:31.418 ************ 2025-05-23 00:40:50.087200 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572) 2025-05-23 00:40:50.087360 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572) 2025-05-23 00:40:50.087786 | orchestrator | 2025-05-23 00:40:50.087943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:50.088426 | orchestrator | Friday 23 May 2025 00:40:50 +0000 (0:00:00.693) 0:00:32.112 ************ 2025-05-23 00:40:50.905148 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7) 2025-05-23 00:40:50.906150 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7) 2025-05-23 00:40:50.907040 | orchestrator | 2025-05-23 00:40:50.911526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:50.911925 | orchestrator | Friday 23 May 2025 00:40:50 +0000 (0:00:00.819) 0:00:32.931 ************ 2025-05-23 00:40:51.374152 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127) 2025-05-23 00:40:51.374238 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127) 2025-05-23 00:40:51.375258 | orchestrator | 2025-05-23 00:40:51.375278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:51.375288 | orchestrator | Friday 23 May 2025 00:40:51 +0000 (0:00:00.465) 0:00:33.397 ************ 2025-05-23 00:40:51.829192 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6) 2025-05-23 00:40:51.830722 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6) 2025-05-23 00:40:51.831707 | orchestrator | 2025-05-23 00:40:51.832733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:40:51.833156 | orchestrator | Friday 23 May 2025 00:40:51 +0000 (0:00:00.457) 0:00:33.854 ************ 2025-05-23 00:40:52.160656 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:40:52.161806 | orchestrator | 2025-05-23 00:40:52.162753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:52.163635 | orchestrator | Friday 23 May 2025 00:40:52 +0000 (0:00:00.331) 0:00:34.185 ************ 2025-05-23 00:40:52.569686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-23 00:40:52.571072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-23 00:40:52.572457 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-23 00:40:52.573696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-23 00:40:52.575119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-23 00:40:52.576047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-23 00:40:52.577462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-23 00:40:52.578629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-23 00:40:52.579515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-23 00:40:52.580108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-23 00:40:52.582474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-23 00:40:52.582845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-23 00:40:52.584151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-23 00:40:52.584357 | orchestrator | 2025-05-23 00:40:52.585285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:52.585908 | orchestrator | Friday 23 May 2025 00:40:52 +0000 (0:00:00.409) 0:00:34.595 ************ 2025-05-23 00:40:52.784857 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:52.785451 | orchestrator | 2025-05-23 00:40:52.786299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:52.787186 | orchestrator | Friday 23 May 2025 00:40:52 +0000 (0:00:00.216) 0:00:34.812 ************ 2025-05-23 00:40:52.991579 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:52.991999 | orchestrator | 2025-05-23 00:40:52.992880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:52.993862 | orchestrator | Friday 23 May 2025 00:40:52 +0000 (0:00:00.204) 0:00:35.016 ************ 2025-05-23 00:40:53.210073 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:53.211190 | orchestrator | 2025-05-23 00:40:53.212324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:53.214441 | orchestrator | Friday 23 May 2025 00:40:53 +0000 (0:00:00.219) 0:00:35.236 ************ 2025-05-23 00:40:53.398726 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:53.399150 | orchestrator | 2025-05-23 00:40:53.400090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:53.400681 | orchestrator | Friday 23 May 2025 00:40:53 +0000 (0:00:00.189) 0:00:35.425 ************ 2025-05-23 00:40:53.978487 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:53.979016 | orchestrator | 2025-05-23 00:40:53.981764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:53.981865 | orchestrator | Friday 23 May 2025 00:40:53 +0000 (0:00:00.577) 0:00:36.003 ************ 2025-05-23 00:40:54.183178 | orchestrator | skipping: [testbed-node-5] 2025-05-23 
00:40:54.183786 | orchestrator | 2025-05-23 00:40:54.183826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:54.183881 | orchestrator | Friday 23 May 2025 00:40:54 +0000 (0:00:00.206) 0:00:36.209 ************ 2025-05-23 00:40:54.400489 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:54.400820 | orchestrator | 2025-05-23 00:40:54.401740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:54.402340 | orchestrator | Friday 23 May 2025 00:40:54 +0000 (0:00:00.216) 0:00:36.426 ************ 2025-05-23 00:40:54.606726 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:54.607076 | orchestrator | 2025-05-23 00:40:54.607606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:54.610535 | orchestrator | Friday 23 May 2025 00:40:54 +0000 (0:00:00.205) 0:00:36.632 ************ 2025-05-23 00:40:55.274178 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-23 00:40:55.274697 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-23 00:40:55.276309 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-23 00:40:55.276782 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-23 00:40:55.278139 | orchestrator | 2025-05-23 00:40:55.278913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:55.279771 | orchestrator | Friday 23 May 2025 00:40:55 +0000 (0:00:00.666) 0:00:37.298 ************ 2025-05-23 00:40:55.471847 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:55.472472 | orchestrator | 2025-05-23 00:40:55.473087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:55.473994 | orchestrator | Friday 23 May 2025 00:40:55 +0000 (0:00:00.198) 0:00:37.497 ************ 2025-05-23 00:40:55.674855 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:55.675177 | orchestrator | 2025-05-23 00:40:55.675611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:55.677209 | orchestrator | Friday 23 May 2025 00:40:55 +0000 (0:00:00.203) 0:00:37.701 ************ 2025-05-23 00:40:55.905353 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:55.906831 | orchestrator | 2025-05-23 00:40:55.907032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:40:55.908075 | orchestrator | Friday 23 May 2025 00:40:55 +0000 (0:00:00.229) 0:00:37.930 ************ 2025-05-23 00:40:56.098967 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:56.099185 | orchestrator | 2025-05-23 00:40:56.100268 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-23 00:40:56.100666 | orchestrator | Friday 23 May 2025 00:40:56 +0000 (0:00:00.194) 0:00:38.124 ************ 2025-05-23 00:40:56.285525 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-23 00:40:56.285759 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-23 00:40:56.287046 | orchestrator | 2025-05-23 00:40:56.287234 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-23 00:40:56.287574 | orchestrator | Friday 23 May 2025 00:40:56 +0000 (0:00:00.186) 0:00:38.311 ************ 2025-05-23 00:40:56.423802 | 
orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:56.424584 | orchestrator | 2025-05-23 00:40:56.425679 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-23 00:40:56.426324 | orchestrator | Friday 23 May 2025 00:40:56 +0000 (0:00:00.139) 0:00:38.450 ************ 2025-05-23 00:40:56.749777 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:56.750657 | orchestrator | 2025-05-23 00:40:56.754294 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-23 00:40:56.754338 | orchestrator | Friday 23 May 2025 00:40:56 +0000 (0:00:00.324) 0:00:38.775 ************ 2025-05-23 00:40:56.904636 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:56.908577 | orchestrator | 2025-05-23 00:40:56.908735 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-23 00:40:56.908823 | orchestrator | Friday 23 May 2025 00:40:56 +0000 (0:00:00.154) 0:00:38.930 ************ 2025-05-23 00:40:57.051484 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:40:57.053009 | orchestrator | 2025-05-23 00:40:57.053507 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-23 00:40:57.054390 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.147) 0:00:39.078 ************ 2025-05-23 00:40:57.225980 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}}) 2025-05-23 00:40:57.226884 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dafe69f8-630b-5486-ba76-590e0b4d1820'}}) 2025-05-23 00:40:57.227788 | orchestrator | 2025-05-23 00:40:57.228123 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-23 00:40:57.228847 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.173) 0:00:39.252 ************ 2025-05-23 00:40:57.375804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}})  2025-05-23 00:40:57.375953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dafe69f8-630b-5486-ba76-590e0b4d1820'}})  2025-05-23 00:40:57.377121 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:57.378155 | orchestrator | 2025-05-23 00:40:57.379229 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-23 00:40:57.379716 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.149) 0:00:39.401 ************ 2025-05-23 00:40:57.574499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}})  2025-05-23 00:40:57.574837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dafe69f8-630b-5486-ba76-590e0b4d1820'}})  2025-05-23 00:40:57.575344 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:40:57.576548 | orchestrator | 2025-05-23 00:40:57.577091 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-23 00:40:57.580181 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.199) 0:00:39.601 ************ 2025-05-23 00:40:57.738711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}})  2025-05-23 00:40:57.738892 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dafe69f8-630b-5486-ba76-590e0b4d1820'}})
2025-05-23 00:40:57.739530 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:57.740493 | orchestrator |
2025-05-23 00:40:57.740725 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-23 00:40:57.741459 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.164) 0:00:39.765 ************
2025-05-23 00:40:57.885936 | orchestrator | ok: [testbed-node-5]
2025-05-23 00:40:57.886653 | orchestrator |
2025-05-23 00:40:57.887909 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-23 00:40:57.888469 | orchestrator | Friday 23 May 2025 00:40:57 +0000 (0:00:00.147) 0:00:39.912 ************
2025-05-23 00:40:58.044749 | orchestrator | ok: [testbed-node-5]
2025-05-23 00:40:58.044854 | orchestrator |
2025-05-23 00:40:58.044872 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-23 00:40:58.044886 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.152) 0:00:40.064 ************
2025-05-23 00:40:58.198813 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:58.199529 | orchestrator |
2025-05-23 00:40:58.200304 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-23 00:40:58.200654 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.160) 0:00:40.225 ************
2025-05-23 00:40:58.342085 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:58.342270 | orchestrator |
2025-05-23 00:40:58.342412 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-23 00:40:58.342987 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.142) 0:00:40.367 ************
2025-05-23 00:40:58.686218 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:58.686334 | orchestrator |
2025-05-23 00:40:58.686426 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-23 00:40:58.687040 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.344) 0:00:40.712 ************
2025-05-23 00:40:58.846087 | orchestrator | ok: [testbed-node-5] => {
2025-05-23 00:40:58.846327 | orchestrator |     "ceph_osd_devices": {
2025-05-23 00:40:58.846946 | orchestrator |         "sdb": {
2025-05-23 00:40:58.847877 | orchestrator |             "osd_lvm_uuid": "1c1d7620-81eb-54f7-8ffb-e9df7a8995e0"
2025-05-23 00:40:58.848800 | orchestrator |         },
2025-05-23 00:40:58.850079 | orchestrator |         "sdc": {
2025-05-23 00:40:58.850321 | orchestrator |             "osd_lvm_uuid": "dafe69f8-630b-5486-ba76-590e0b4d1820"
2025-05-23 00:40:58.851325 | orchestrator |         }
2025-05-23 00:40:58.851782 | orchestrator |     }
2025-05-23 00:40:58.852242 | orchestrator | }
2025-05-23 00:40:58.852808 | orchestrator |
2025-05-23 00:40:58.853738 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-23 00:40:58.853937 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.159) 0:00:40.871 ************
2025-05-23 00:40:58.975670 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:58.976184 | orchestrator |
2025-05-23 00:40:58.977096 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-23 00:40:58.977969 | orchestrator | Friday 23 May 2025 00:40:58 +0000 (0:00:00.129) 0:00:41.001 ************
2025-05-23 00:40:59.119783 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:59.120652 | orchestrator |
2025-05-23 00:40:59.120910 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-23 00:40:59.122136 | orchestrator | Friday 23 May 2025 00:40:59 +0000 (0:00:00.144) 0:00:41.145 ************
2025-05-23 00:40:59.256759 | orchestrator | skipping: [testbed-node-5]
2025-05-23 00:40:59.257130 | orchestrator |
2025-05-23 00:40:59.257398 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-23 00:40:59.258064 | orchestrator | Friday 23 May 2025 00:40:59 +0000 (0:00:00.136) 0:00:41.282 ************
2025-05-23 00:40:59.520648 | orchestrator | changed: [testbed-node-5] => {
2025-05-23 00:40:59.522581 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-23 00:40:59.523971 | orchestrator |         "ceph_osd_devices": {
2025-05-23 00:40:59.524659 | orchestrator |             "sdb": {
2025-05-23 00:40:59.525339 | orchestrator |                 "osd_lvm_uuid": "1c1d7620-81eb-54f7-8ffb-e9df7a8995e0"
2025-05-23 00:40:59.526095 | orchestrator |             },
2025-05-23 00:40:59.526813 | orchestrator |             "sdc": {
2025-05-23 00:40:59.527745 | orchestrator |                 "osd_lvm_uuid": "dafe69f8-630b-5486-ba76-590e0b4d1820"
2025-05-23 00:40:59.528300 | orchestrator |             }
2025-05-23 00:40:59.528964 | orchestrator |         },
2025-05-23 00:40:59.532295 | orchestrator |         "lvm_volumes": [
2025-05-23 00:40:59.533742 | orchestrator |             {
2025-05-23 00:40:59.534388 | orchestrator |                 "data": "osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0",
2025-05-23 00:40:59.534816 | orchestrator |                 "data_vg": "ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0"
2025-05-23 00:40:59.535302 | orchestrator |             },
2025-05-23 00:40:59.535802 | orchestrator |             {
2025-05-23 00:40:59.536252 | orchestrator |                 "data": "osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820",
2025-05-23 00:40:59.536528 | orchestrator |                 "data_vg": "ceph-dafe69f8-630b-5486-ba76-590e0b4d1820"
2025-05-23 00:40:59.536920 | orchestrator |             }
2025-05-23 00:40:59.537191 | orchestrator |         ]
2025-05-23 00:40:59.537700 | orchestrator |     }
2025-05-23 00:40:59.538100 | orchestrator | }
2025-05-23 00:40:59.538314 | orchestrator |
2025-05-23 00:40:59.538564 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-23 00:40:59.538836 | orchestrator | Friday 23 May 2025 00:40:59 +0000 (0:00:00.263) 0:00:41.546 ************
2025-05-23 00:41:00.652642 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-23 00:41:00.653018 | orchestrator |
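The "Write configuration file" handler above persists the compiled structure for testbed-node-5 to the configuration repository on the manager node; the target path is not shown in this log. Rendered as YAML host vars, the written data would look roughly like the sketch below, using only the values printed by the play (an assumed rendering, not the literal file):

    ---
    # Host vars for testbed-node-5 (file name and location assumed, not visible in the log)
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 1c1d7620-81eb-54f7-8ffb-e9df7a8995e0
      sdc:
        osd_lvm_uuid: dafe69f8-630b-5486-ba76-590e0b4d1820
    # lvm_volumes uses the block-only format understood by ceph-ansible: one data LV per OSD
    lvm_volumes:
      - data: osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0
        data_vg: ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0
      - data: osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820
        data_vg: ceph-dafe69f8-630b-5486-ba76-590e0b4d1820

Because no DB or WAL devices are configured (those "Set ... devices config data" tasks were skipped above), each entry carries only data and data_vg.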
2025-05-23 00:41:00.654504 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 00:41:00.655553 | orchestrator | 2025-05-23 00:41:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-23 00:41:00.655620 | orchestrator | 2025-05-23 00:41:00 | INFO  | Please wait and do not abort execution.
2025-05-23 00:41:00.656213 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-23 00:41:00.656862 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-23 00:41:00.657987 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-23 00:41:00.658955 | orchestrator |
2025-05-23 00:41:00.660088 | orchestrator |
2025-05-23 00:41:00.660695 | orchestrator |
2025-05-23 00:41:00.661077 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 00:41:00.661956 | orchestrator | Friday 23 May 2025 00:41:00 +0000 (0:00:01.130) 0:00:42.677 ************
2025-05-23 00:41:00.662952 | orchestrator | ===============================================================================
2025-05-23 00:41:00.663779 | orchestrator | Write configuration file ------------------------------------------------ 4.63s
2025-05-23 00:41:00.663935 | orchestrator | Add known links to the list of available block devices ------------------ 1.78s
2025-05-23 00:41:00.664946 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s
2025-05-23 00:41:00.665428 | orchestrator | Print configuration data ------------------------------------------------ 0.99s
2025-05-23 00:41:00.666161 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-05-23 00:41:00.666294 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-05-23 00:41:00.666836 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s
2025-05-23 00:41:00.667433 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.78s
2025-05-23 00:41:00.667736 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.72s
2025-05-23 00:41:00.668185 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-23 00:41:00.668778 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-05-23 00:41:00.668957 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.67s
2025-05-23 00:41:00.669564 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-05-23 00:41:00.669855 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.66s
2025-05-23 00:41:00.670231 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-23 00:41:00.670611 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.61s
2025-05-23 00:41:00.670990 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-05-23 00:41:00.671377 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-05-23 00:41:00.671922 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-05-23 00:41:00.672222 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.58s
2025-05-23 00:41:12.771201 | orchestrator | 2025-05-23 00:41:12 | INFO  | Task 7e34fa1a-eb36-4fa0-8974-b3a6cb918748 is running in background. Output coming soon.
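The background task announced above regenerates the dynamic inventory on the manager; its log lines follow. The messages pair each generated file with the variable it carries, so one of those snippets would plausibly look like the following (path and value are illustrative placeholders, since the actual file contents are not logged):

    ---
    # e.g. 050-ceph-cluster-fsid.yml, reported below as "Writing ... with ceph_cluster_fsid"
    ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000  # placeholder, real FSID not shown in the log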
2025-05-23 00:41:37.206492 | orchestrator | 2025-05-23 00:41:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-23 00:41:37.206613 | orchestrator | 2025-05-23 00:41:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-23 00:41:37.206697 | orchestrator | 2025-05-23 00:41:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-23 00:41:37.206714 | orchestrator | 2025-05-23 00:41:29 | INFO  | Handling group overwrites in 99-overwrite
2025-05-23 00:41:37.206726 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group frr:children from 60-generic
2025-05-23 00:41:37.206738 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group storage:children from 50-kolla
2025-05-23 00:41:37.206749 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-23 00:41:37.206760 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-23 00:41:37.206786 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-23 00:41:37.206797 | orchestrator | 2025-05-23 00:41:29 | INFO  | Handling group overwrites in 20-roles
2025-05-23 00:41:37.206808 | orchestrator | 2025-05-23 00:41:29 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-23 00:41:37.206819 | orchestrator | 2025-05-23 00:41:29 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-23 00:41:37.206830 | orchestrator | 2025-05-23 00:41:37 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-23 00:41:38.826377 | orchestrator | 2025-05-23 00:41:38 | INFO  | Task 9bafdd61-de1e-4aef-8689-5176e6f6f893 (ceph-create-lvm-devices) was prepared for execution.
2025-05-23 00:41:38.826477 | orchestrator | 2025-05-23 00:41:38 | INFO  | It takes a moment until task 9bafdd61-de1e-4aef-8689-5176e6f6f893 (ceph-create-lvm-devices) has been started and output is visible here.
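The ceph-create-lvm-devices play that follows turns each lvm_volumes entry into a physical volume, a volume group and a block logical volume (the "Create block VGs" and "Create block LVs" tasks below, each reporting changed for sdb and sdc). The OSISM tasks themselves are not reproduced in this log; a minimal sketch of those two steps with the standard community.general LVM modules, under the assumption that a VG-to-PV mapping like the one built by "Create dict of block VGs -> PVs from ceph_osd_devices" is available as a variable (name assumed here):

    ---
    # Sketch only -- module choice, variable names and sizing are assumptions, not the OSISM implementation
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"                       # e.g. ceph-17b95678-9240-5166-938b-e89fe6559568
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"   # e.g. /dev/sdb; mapping name is hypothetical
        state: present
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"                          # e.g. osd-block-17b95678-9240-5166-938b-e89fe6559568
        size: 100%FREE                                 # one LV spanning the whole VG; real sizing logic not logged
        state: present
      loop: "{{ lvm_volumes }}"

The end state per device is one PV/VG/LV triple named after the osd_lvm_uuid, which is what the lvm_report printed near the end of this run for testbed-node-3 confirms.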
2025-05-23 00:41:41.606211 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 00:41:42.045249 | orchestrator | 2025-05-23 00:41:42.045618 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-23 00:41:42.049726 | orchestrator | 2025-05-23 00:41:42.050554 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:41:42.051098 | orchestrator | Friday 23 May 2025 00:41:42 +0000 (0:00:00.381) 0:00:00.381 ************ 2025-05-23 00:41:42.265684 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-23 00:41:42.267425 | orchestrator | 2025-05-23 00:41:42.268167 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:41:42.268762 | orchestrator | Friday 23 May 2025 00:41:42 +0000 (0:00:00.221) 0:00:00.603 ************ 2025-05-23 00:41:42.476018 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:41:42.479123 | orchestrator | 2025-05-23 00:41:42.479622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:42.480755 | orchestrator | Friday 23 May 2025 00:41:42 +0000 (0:00:00.209) 0:00:00.812 ************ 2025-05-23 00:41:43.091762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-23 00:41:43.091944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-23 00:41:43.093491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-23 00:41:43.094088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-23 00:41:43.094980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-23 00:41:43.095507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-23 00:41:43.096292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-23 00:41:43.096822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-23 00:41:43.097394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-23 00:41:43.097848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-23 00:41:43.098240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-23 00:41:43.098616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-23 00:41:43.099128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-23 00:41:43.099515 | orchestrator | 2025-05-23 00:41:43.099936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:43.100520 | orchestrator | Friday 23 May 2025 00:41:43 +0000 (0:00:00.616) 0:00:01.429 ************ 2025-05-23 00:41:43.263878 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:43.263961 | orchestrator | 2025-05-23 00:41:43.265743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:43.266392 | orchestrator | Friday 23 May 2025 00:41:43 +0000 
(0:00:00.170) 0:00:01.599 ************ 2025-05-23 00:41:43.444512 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:43.444596 | orchestrator | 2025-05-23 00:41:43.445379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:43.445511 | orchestrator | Friday 23 May 2025 00:41:43 +0000 (0:00:00.182) 0:00:01.782 ************ 2025-05-23 00:41:43.641214 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:43.642083 | orchestrator | 2025-05-23 00:41:43.645185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:43.646706 | orchestrator | Friday 23 May 2025 00:41:43 +0000 (0:00:00.195) 0:00:01.978 ************ 2025-05-23 00:41:43.827804 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:43.827894 | orchestrator | 2025-05-23 00:41:43.827909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:43.828120 | orchestrator | Friday 23 May 2025 00:41:43 +0000 (0:00:00.186) 0:00:02.164 ************ 2025-05-23 00:41:44.009621 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:44.009767 | orchestrator | 2025-05-23 00:41:44.009783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:44.010194 | orchestrator | Friday 23 May 2025 00:41:44 +0000 (0:00:00.181) 0:00:02.346 ************ 2025-05-23 00:41:44.198889 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:44.199295 | orchestrator | 2025-05-23 00:41:44.199957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:44.200666 | orchestrator | Friday 23 May 2025 00:41:44 +0000 (0:00:00.190) 0:00:02.536 ************ 2025-05-23 00:41:44.389104 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:44.389580 | orchestrator | 2025-05-23 00:41:44.390728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:44.391267 | orchestrator | Friday 23 May 2025 00:41:44 +0000 (0:00:00.190) 0:00:02.727 ************ 2025-05-23 00:41:44.566728 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:44.566811 | orchestrator | 2025-05-23 00:41:44.567395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:44.568031 | orchestrator | Friday 23 May 2025 00:41:44 +0000 (0:00:00.174) 0:00:02.902 ************ 2025-05-23 00:41:45.067906 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118) 2025-05-23 00:41:45.068368 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118) 2025-05-23 00:41:45.069557 | orchestrator | 2025-05-23 00:41:45.070231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:45.071012 | orchestrator | Friday 23 May 2025 00:41:45 +0000 (0:00:00.503) 0:00:03.405 ************ 2025-05-23 00:41:45.707425 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5) 2025-05-23 00:41:45.710425 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5) 2025-05-23 00:41:45.710475 | orchestrator | 2025-05-23 00:41:45.710489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 
00:41:45.710759 | orchestrator | Friday 23 May 2025 00:41:45 +0000 (0:00:00.638) 0:00:04.044 ************ 2025-05-23 00:41:46.083190 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75) 2025-05-23 00:41:46.085232 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75) 2025-05-23 00:41:46.085266 | orchestrator | 2025-05-23 00:41:46.087228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:46.087254 | orchestrator | Friday 23 May 2025 00:41:46 +0000 (0:00:00.376) 0:00:04.420 ************ 2025-05-23 00:41:46.456969 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c) 2025-05-23 00:41:46.459412 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c) 2025-05-23 00:41:46.459444 | orchestrator | 2025-05-23 00:41:46.459457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:41:46.459815 | orchestrator | Friday 23 May 2025 00:41:46 +0000 (0:00:00.372) 0:00:04.793 ************ 2025-05-23 00:41:46.756197 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:41:46.758194 | orchestrator | 2025-05-23 00:41:46.758367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:46.759283 | orchestrator | Friday 23 May 2025 00:41:46 +0000 (0:00:00.299) 0:00:05.093 ************ 2025-05-23 00:41:47.178304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-23 00:41:47.180951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-23 00:41:47.180966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-23 00:41:47.180971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-23 00:41:47.180976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-23 00:41:47.180981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-23 00:41:47.183270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-23 00:41:47.183291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-23 00:41:47.183297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-23 00:41:47.183303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-23 00:41:47.183309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-23 00:41:47.183314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-23 00:41:47.183391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-23 00:41:47.185237 | orchestrator | 2025-05-23 00:41:47.185292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:47.185389 | orchestrator | Friday 23 May 2025 00:41:47 +0000 
(0:00:00.422) 0:00:05.516 ************ 2025-05-23 00:41:47.375001 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:47.375927 | orchestrator | 2025-05-23 00:41:47.375959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:47.377181 | orchestrator | Friday 23 May 2025 00:41:47 +0000 (0:00:00.194) 0:00:05.710 ************ 2025-05-23 00:41:47.602758 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:47.602999 | orchestrator | 2025-05-23 00:41:47.603948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:47.604801 | orchestrator | Friday 23 May 2025 00:41:47 +0000 (0:00:00.228) 0:00:05.939 ************ 2025-05-23 00:41:47.810149 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:47.811355 | orchestrator | 2025-05-23 00:41:47.811395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:47.812105 | orchestrator | Friday 23 May 2025 00:41:47 +0000 (0:00:00.207) 0:00:06.146 ************ 2025-05-23 00:41:48.028047 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:48.029573 | orchestrator | 2025-05-23 00:41:48.029933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:48.030303 | orchestrator | Friday 23 May 2025 00:41:48 +0000 (0:00:00.217) 0:00:06.364 ************ 2025-05-23 00:41:48.669986 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:48.670188 | orchestrator | 2025-05-23 00:41:48.670311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:48.670687 | orchestrator | Friday 23 May 2025 00:41:48 +0000 (0:00:00.641) 0:00:07.005 ************ 2025-05-23 00:41:48.906187 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:48.906292 | orchestrator | 2025-05-23 00:41:48.906379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:48.906919 | orchestrator | Friday 23 May 2025 00:41:48 +0000 (0:00:00.237) 0:00:07.243 ************ 2025-05-23 00:41:49.116417 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:49.116574 | orchestrator | 2025-05-23 00:41:49.119062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:49.119774 | orchestrator | Friday 23 May 2025 00:41:49 +0000 (0:00:00.210) 0:00:07.453 ************ 2025-05-23 00:41:49.324376 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:49.324575 | orchestrator | 2025-05-23 00:41:49.325535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:49.325945 | orchestrator | Friday 23 May 2025 00:41:49 +0000 (0:00:00.206) 0:00:07.659 ************ 2025-05-23 00:41:49.997804 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-23 00:41:50.001537 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-23 00:41:50.001592 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-23 00:41:50.001625 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-23 00:41:50.001915 | orchestrator | 2025-05-23 00:41:50.003249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:50.003861 | orchestrator | Friday 23 May 2025 00:41:49 +0000 (0:00:00.672) 0:00:08.331 ************ 2025-05-23 00:41:50.222736 | orchestrator | 
skipping: [testbed-node-3] 2025-05-23 00:41:50.223503 | orchestrator | 2025-05-23 00:41:50.224384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:50.225083 | orchestrator | Friday 23 May 2025 00:41:50 +0000 (0:00:00.227) 0:00:08.559 ************ 2025-05-23 00:41:50.418950 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:50.419584 | orchestrator | 2025-05-23 00:41:50.420454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:50.421181 | orchestrator | Friday 23 May 2025 00:41:50 +0000 (0:00:00.195) 0:00:08.755 ************ 2025-05-23 00:41:50.612174 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:50.612330 | orchestrator | 2025-05-23 00:41:50.612899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:41:50.613487 | orchestrator | Friday 23 May 2025 00:41:50 +0000 (0:00:00.192) 0:00:08.948 ************ 2025-05-23 00:41:50.830963 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:50.831062 | orchestrator | 2025-05-23 00:41:50.831893 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-23 00:41:50.832280 | orchestrator | Friday 23 May 2025 00:41:50 +0000 (0:00:00.217) 0:00:09.166 ************ 2025-05-23 00:41:50.968835 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:50.969790 | orchestrator | 2025-05-23 00:41:50.970599 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-23 00:41:50.971493 | orchestrator | Friday 23 May 2025 00:41:50 +0000 (0:00:00.139) 0:00:09.305 ************ 2025-05-23 00:41:51.174634 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17b95678-9240-5166-938b-e89fe6559568'}}) 2025-05-23 00:41:51.175177 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}}) 2025-05-23 00:41:51.176607 | orchestrator | 2025-05-23 00:41:51.177220 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-23 00:41:51.177955 | orchestrator | Friday 23 May 2025 00:41:51 +0000 (0:00:00.205) 0:00:09.511 ************ 2025-05-23 00:41:53.316052 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'}) 2025-05-23 00:41:53.316790 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}) 2025-05-23 00:41:53.318602 | orchestrator | 2025-05-23 00:41:53.319376 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-23 00:41:53.319848 | orchestrator | Friday 23 May 2025 00:41:53 +0000 (0:00:02.138) 0:00:11.650 ************ 2025-05-23 00:41:53.497152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:53.498125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:53.498796 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:53.500573 | orchestrator | 2025-05-23 00:41:53.500629 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-23 00:41:53.500717 | orchestrator | Friday 23 May 2025 00:41:53 +0000 (0:00:00.183) 0:00:11.833 ************ 2025-05-23 00:41:54.961956 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'}) 2025-05-23 00:41:54.962070 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}) 2025-05-23 00:41:54.962116 | orchestrator | 2025-05-23 00:41:54.962811 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-23 00:41:54.962833 | orchestrator | Friday 23 May 2025 00:41:54 +0000 (0:00:01.464) 0:00:13.298 ************ 2025-05-23 00:41:55.124318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:55.124470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:55.124806 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:55.125661 | orchestrator | 2025-05-23 00:41:55.125832 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-23 00:41:55.126508 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.163) 0:00:13.462 ************ 2025-05-23 00:41:55.267070 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:55.267170 | orchestrator | 2025-05-23 00:41:55.267924 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-23 00:41:55.268252 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.141) 0:00:13.603 ************ 2025-05-23 00:41:55.429061 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:55.429418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:55.430583 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:55.431621 | orchestrator | 2025-05-23 00:41:55.432549 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-23 00:41:55.432886 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.160) 0:00:13.764 ************ 2025-05-23 00:41:55.570124 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:55.570409 | orchestrator | 2025-05-23 00:41:55.570898 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-23 00:41:55.571536 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.141) 0:00:13.905 ************ 2025-05-23 00:41:55.727967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:55.728069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:55.730390 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 00:41:55.730494 | orchestrator | 2025-05-23 00:41:55.731568 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-23 00:41:55.731594 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.158) 0:00:14.064 ************ 2025-05-23 00:41:55.859215 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:55.859306 | orchestrator | 2025-05-23 00:41:55.859385 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-23 00:41:55.861430 | orchestrator | Friday 23 May 2025 00:41:55 +0000 (0:00:00.130) 0:00:14.194 ************ 2025-05-23 00:41:56.159178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:56.159303 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:56.160264 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:56.162279 | orchestrator | 2025-05-23 00:41:56.162352 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-23 00:41:56.162555 | orchestrator | Friday 23 May 2025 00:41:56 +0000 (0:00:00.300) 0:00:14.495 ************ 2025-05-23 00:41:56.302305 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:41:56.302470 | orchestrator | 2025-05-23 00:41:56.303202 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-23 00:41:56.304634 | orchestrator | Friday 23 May 2025 00:41:56 +0000 (0:00:00.142) 0:00:14.638 ************ 2025-05-23 00:41:56.475951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:56.476056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:56.476475 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:56.477886 | orchestrator | 2025-05-23 00:41:56.478681 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-23 00:41:56.481666 | orchestrator | Friday 23 May 2025 00:41:56 +0000 (0:00:00.173) 0:00:14.811 ************ 2025-05-23 00:41:56.672731 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:56.672967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:56.673475 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:56.673800 | orchestrator | 2025-05-23 00:41:56.674384 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-23 00:41:56.676894 | orchestrator | Friday 23 May 2025 00:41:56 +0000 (0:00:00.197) 0:00:15.009 ************ 2025-05-23 00:41:56.866572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:41:56.867037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:41:56.867548 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:56.868593 | orchestrator | 2025-05-23 00:41:56.868940 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-23 00:41:56.869668 | orchestrator | Friday 23 May 2025 00:41:56 +0000 (0:00:00.194) 0:00:15.203 ************ 2025-05-23 00:41:57.013742 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:57.013863 | orchestrator | 2025-05-23 00:41:57.013887 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-23 00:41:57.013910 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.143) 0:00:15.347 ************ 2025-05-23 00:41:57.157129 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:57.157762 | orchestrator | 2025-05-23 00:41:57.158413 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-23 00:41:57.159078 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.146) 0:00:15.494 ************ 2025-05-23 00:41:57.309158 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:41:57.309272 | orchestrator | 2025-05-23 00:41:57.310504 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-23 00:41:57.311367 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.151) 0:00:15.645 ************ 2025-05-23 00:41:57.459012 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:41:57.459128 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-23 00:41:57.459955 | orchestrator | } 2025-05-23 00:41:57.462614 | orchestrator | 2025-05-23 00:41:57.462749 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-23 00:41:57.462891 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.147) 0:00:15.793 ************ 2025-05-23 00:41:57.603149 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:41:57.603246 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-23 00:41:57.603345 | orchestrator | } 2025-05-23 00:41:57.604508 | orchestrator | 2025-05-23 00:41:57.604763 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-23 00:41:57.605361 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.146) 0:00:15.939 ************ 2025-05-23 00:41:57.758312 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:41:57.758525 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-23 00:41:57.759894 | orchestrator | } 2025-05-23 00:41:57.760939 | orchestrator | 2025-05-23 00:41:57.761542 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-23 00:41:57.764960 | orchestrator | Friday 23 May 2025 00:41:57 +0000 (0:00:00.155) 0:00:16.094 ************ 2025-05-23 00:41:58.865458 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:41:58.865636 | orchestrator | 2025-05-23 00:41:58.866521 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-23 00:41:58.867494 | orchestrator | Friday 23 May 2025 00:41:58 +0000 (0:00:01.105) 0:00:17.200 ************ 2025-05-23 00:41:59.376062 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:41:59.376252 | orchestrator | 2025-05-23 00:41:59.377012 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-23 00:41:59.377780 | orchestrator | Friday 23 May 2025 00:41:59 +0000 (0:00:00.512) 0:00:17.712 ************ 2025-05-23 00:41:59.865472 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:41:59.865757 | orchestrator | 2025-05-23 00:41:59.865788 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-23 00:41:59.866139 | orchestrator | Friday 23 May 2025 00:41:59 +0000 (0:00:00.489) 0:00:18.202 ************ 2025-05-23 00:42:00.012333 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:42:00.012959 | orchestrator | 2025-05-23 00:42:00.013250 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-23 00:42:00.013889 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.147) 0:00:18.349 ************ 2025-05-23 00:42:00.139284 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.139562 | orchestrator | 2025-05-23 00:42:00.140059 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-23 00:42:00.141172 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.125) 0:00:18.475 ************ 2025-05-23 00:42:00.249596 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.252042 | orchestrator | 2025-05-23 00:42:00.254223 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-23 00:42:00.254781 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.111) 0:00:18.586 ************ 2025-05-23 00:42:00.391576 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:42:00.391703 | orchestrator |  "vgs_report": { 2025-05-23 00:42:00.391970 | orchestrator |  "vg": [] 2025-05-23 00:42:00.392346 | orchestrator |  } 2025-05-23 00:42:00.392774 | orchestrator | } 2025-05-23 00:42:00.395543 | orchestrator | 2025-05-23 00:42:00.395599 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-23 00:42:00.395613 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.141) 0:00:18.727 ************ 2025-05-23 00:42:00.553222 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.553480 | orchestrator | 2025-05-23 00:42:00.554545 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-23 00:42:00.554960 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.161) 0:00:18.889 ************ 2025-05-23 00:42:00.693310 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.693577 | orchestrator | 2025-05-23 00:42:00.694201 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-23 00:42:00.695577 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.140) 0:00:19.030 ************ 2025-05-23 00:42:00.831331 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.831597 | orchestrator | 2025-05-23 00:42:00.832051 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-23 00:42:00.832706 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.138) 0:00:19.168 ************ 2025-05-23 00:42:00.962238 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:00.962447 | orchestrator | 2025-05-23 00:42:00.963231 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-23 00:42:00.963746 | orchestrator | Friday 23 May 2025 00:42:00 +0000 (0:00:00.130) 0:00:19.299 ************ 2025-05-23 
00:42:01.258956 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.259078 | orchestrator | 2025-05-23 00:42:01.259096 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-23 00:42:01.259186 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.294) 0:00:19.593 ************ 2025-05-23 00:42:01.402957 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.403109 | orchestrator | 2025-05-23 00:42:01.403222 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-23 00:42:01.403972 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.145) 0:00:19.739 ************ 2025-05-23 00:42:01.546096 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.546263 | orchestrator | 2025-05-23 00:42:01.548479 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-23 00:42:01.548509 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.142) 0:00:19.881 ************ 2025-05-23 00:42:01.684979 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.686434 | orchestrator | 2025-05-23 00:42:01.688686 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-23 00:42:01.688730 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.139) 0:00:20.020 ************ 2025-05-23 00:42:01.820153 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.821252 | orchestrator | 2025-05-23 00:42:01.822507 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-23 00:42:01.824961 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.136) 0:00:20.157 ************ 2025-05-23 00:42:01.957163 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:01.957282 | orchestrator | 2025-05-23 00:42:01.957382 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-23 00:42:01.957985 | orchestrator | Friday 23 May 2025 00:42:01 +0000 (0:00:00.136) 0:00:20.293 ************ 2025-05-23 00:42:02.096070 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.096438 | orchestrator | 2025-05-23 00:42:02.097558 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-23 00:42:02.098103 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.138) 0:00:20.432 ************ 2025-05-23 00:42:02.237716 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.238333 | orchestrator | 2025-05-23 00:42:02.239206 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-23 00:42:02.239768 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.142) 0:00:20.574 ************ 2025-05-23 00:42:02.382747 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.382930 | orchestrator | 2025-05-23 00:42:02.383492 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-23 00:42:02.383979 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.145) 0:00:20.719 ************ 2025-05-23 00:42:02.528882 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.529152 | orchestrator | 2025-05-23 00:42:02.529707 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-23 00:42:02.530330 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.145) 0:00:20.865 
************ 2025-05-23 00:42:02.707027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:02.707440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:02.708373 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.708909 | orchestrator | 2025-05-23 00:42:02.709830 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-23 00:42:02.710069 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.177) 0:00:21.043 ************ 2025-05-23 00:42:02.871906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:02.872705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:02.873316 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:02.873802 | orchestrator | 2025-05-23 00:42:02.874544 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-23 00:42:02.874802 | orchestrator | Friday 23 May 2025 00:42:02 +0000 (0:00:00.166) 0:00:21.209 ************ 2025-05-23 00:42:03.223513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:03.223611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:03.223770 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:03.223791 | orchestrator | 2025-05-23 00:42:03.224909 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-23 00:42:03.225115 | orchestrator | Friday 23 May 2025 00:42:03 +0000 (0:00:00.350) 0:00:21.559 ************ 2025-05-23 00:42:03.384607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:03.384864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:03.385340 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:03.385941 | orchestrator | 2025-05-23 00:42:03.386846 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-23 00:42:03.387162 | orchestrator | Friday 23 May 2025 00:42:03 +0000 (0:00:00.160) 0:00:21.720 ************ 2025-05-23 00:42:03.569583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:03.571141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:03.571996 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:03.573195 | orchestrator | 2025-05-23 00:42:03.574493 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-23 00:42:03.575421 | orchestrator | Friday 23 May 2025 00:42:03 +0000 (0:00:00.184) 0:00:21.905 ************ 2025-05-23 00:42:03.749912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:03.750114 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:03.751979 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:03.753351 | orchestrator | 2025-05-23 00:42:03.754853 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-23 00:42:03.755203 | orchestrator | Friday 23 May 2025 00:42:03 +0000 (0:00:00.180) 0:00:22.085 ************ 2025-05-23 00:42:03.914211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:03.916566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:03.916602 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:03.917532 | orchestrator | 2025-05-23 00:42:03.918415 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-23 00:42:03.920179 | orchestrator | Friday 23 May 2025 00:42:03 +0000 (0:00:00.163) 0:00:22.249 ************ 2025-05-23 00:42:04.085287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:04.087221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:04.088178 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:04.091282 | orchestrator | 2025-05-23 00:42:04.091309 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-23 00:42:04.091324 | orchestrator | Friday 23 May 2025 00:42:04 +0000 (0:00:00.171) 0:00:22.421 ************ 2025-05-23 00:42:04.592331 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:42:04.592437 | orchestrator | 2025-05-23 00:42:04.592562 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-23 00:42:04.594105 | orchestrator | Friday 23 May 2025 00:42:04 +0000 (0:00:00.505) 0:00:22.927 ************ 2025-05-23 00:42:05.106167 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:42:05.106499 | orchestrator | 2025-05-23 00:42:05.107305 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-23 00:42:05.108779 | orchestrator | Friday 23 May 2025 00:42:05 +0000 (0:00:00.513) 0:00:23.441 ************ 2025-05-23 00:42:05.256826 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:42:05.257094 | orchestrator | 2025-05-23 00:42:05.258885 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-23 00:42:05.259376 | orchestrator | Friday 23 May 2025 00:42:05 +0000 (0:00:00.152) 0:00:23.594 ************ 2025-05-23 00:42:05.435771 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'vg_name': 'ceph-17b95678-9240-5166-938b-e89fe6559568'}) 2025-05-23 00:42:05.435890 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'vg_name': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}) 2025-05-23 00:42:05.436217 | orchestrator | 2025-05-23 00:42:05.436780 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-23 00:42:05.437030 | orchestrator | Friday 23 May 2025 00:42:05 +0000 (0:00:00.178) 0:00:23.772 ************ 2025-05-23 00:42:05.777260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:05.778894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:05.778933 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:05.779249 | orchestrator | 2025-05-23 00:42:05.780056 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-23 00:42:05.780773 | orchestrator | Friday 23 May 2025 00:42:05 +0000 (0:00:00.337) 0:00:24.110 ************ 2025-05-23 00:42:05.958146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:05.958313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:05.958746 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:05.959866 | orchestrator | 2025-05-23 00:42:05.960500 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-23 00:42:05.961211 | orchestrator | Friday 23 May 2025 00:42:05 +0000 (0:00:00.184) 0:00:24.294 ************ 2025-05-23 00:42:06.144643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'})  2025-05-23 00:42:06.145119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'})  2025-05-23 00:42:06.145712 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:42:06.146681 | orchestrator | 2025-05-23 00:42:06.147119 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-23 00:42:06.147737 | orchestrator | Friday 23 May 2025 00:42:06 +0000 (0:00:00.186) 0:00:24.481 ************ 2025-05-23 00:42:06.829574 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 00:42:06.830887 | orchestrator |  "lvm_report": { 2025-05-23 00:42:06.831529 | orchestrator |  "lv": [ 2025-05-23 00:42:06.832828 | orchestrator |  { 2025-05-23 00:42:06.833310 | orchestrator |  "lv_name": "osd-block-17b95678-9240-5166-938b-e89fe6559568", 2025-05-23 00:42:06.834626 | orchestrator |  "vg_name": "ceph-17b95678-9240-5166-938b-e89fe6559568" 2025-05-23 00:42:06.835727 | orchestrator |  }, 2025-05-23 00:42:06.836577 | orchestrator |  { 2025-05-23 00:42:06.837118 | orchestrator |  "lv_name": "osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0", 2025-05-23 
00:42:06.837684 | orchestrator |  "vg_name": "ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0" 2025-05-23 00:42:06.838268 | orchestrator |  } 2025-05-23 00:42:06.839295 | orchestrator |  ], 2025-05-23 00:42:06.839827 | orchestrator |  "pv": [ 2025-05-23 00:42:06.840305 | orchestrator |  { 2025-05-23 00:42:06.840928 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-23 00:42:06.842126 | orchestrator |  "vg_name": "ceph-17b95678-9240-5166-938b-e89fe6559568" 2025-05-23 00:42:06.842673 | orchestrator |  }, 2025-05-23 00:42:06.843187 | orchestrator |  { 2025-05-23 00:42:06.843431 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-23 00:42:06.843808 | orchestrator |  "vg_name": "ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0" 2025-05-23 00:42:06.844273 | orchestrator |  } 2025-05-23 00:42:06.844722 | orchestrator |  ] 2025-05-23 00:42:06.845050 | orchestrator |  } 2025-05-23 00:42:06.845532 | orchestrator | } 2025-05-23 00:42:06.845910 | orchestrator | 2025-05-23 00:42:06.847147 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-23 00:42:06.847270 | orchestrator | 2025-05-23 00:42:06.847412 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:42:06.847691 | orchestrator | Friday 23 May 2025 00:42:06 +0000 (0:00:00.684) 0:00:25.166 ************ 2025-05-23 00:42:07.403456 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-23 00:42:07.406120 | orchestrator | 2025-05-23 00:42:07.406719 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:42:07.409089 | orchestrator | Friday 23 May 2025 00:42:07 +0000 (0:00:00.574) 0:00:25.740 ************ 2025-05-23 00:42:07.665332 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:07.665570 | orchestrator | 2025-05-23 00:42:07.665861 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:07.666752 | orchestrator | Friday 23 May 2025 00:42:07 +0000 (0:00:00.261) 0:00:26.002 ************ 2025-05-23 00:42:08.129971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-23 00:42:08.130268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-23 00:42:08.131648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-23 00:42:08.132125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-23 00:42:08.132758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-23 00:42:08.133972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-23 00:42:08.134347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-23 00:42:08.137492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-23 00:42:08.137522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-23 00:42:08.137534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-23 00:42:08.137546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-23 00:42:08.137557 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-23 00:42:08.137644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-23 00:42:08.138650 | orchestrator | 2025-05-23 00:42:08.139579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:08.140128 | orchestrator | Friday 23 May 2025 00:42:08 +0000 (0:00:00.465) 0:00:26.467 ************ 2025-05-23 00:42:08.337996 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:08.338549 | orchestrator | 2025-05-23 00:42:08.339065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:08.340065 | orchestrator | Friday 23 May 2025 00:42:08 +0000 (0:00:00.208) 0:00:26.675 ************ 2025-05-23 00:42:08.532167 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:08.532358 | orchestrator | 2025-05-23 00:42:08.532707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:08.533204 | orchestrator | Friday 23 May 2025 00:42:08 +0000 (0:00:00.194) 0:00:26.869 ************ 2025-05-23 00:42:08.750347 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:08.753138 | orchestrator | 2025-05-23 00:42:08.754708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:08.755112 | orchestrator | Friday 23 May 2025 00:42:08 +0000 (0:00:00.216) 0:00:27.085 ************ 2025-05-23 00:42:08.952166 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:08.952997 | orchestrator | 2025-05-23 00:42:08.953549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:08.954734 | orchestrator | Friday 23 May 2025 00:42:08 +0000 (0:00:00.203) 0:00:27.289 ************ 2025-05-23 00:42:09.154760 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:09.155053 | orchestrator | 2025-05-23 00:42:09.155757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:09.156220 | orchestrator | Friday 23 May 2025 00:42:09 +0000 (0:00:00.202) 0:00:27.491 ************ 2025-05-23 00:42:09.335093 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:09.335185 | orchestrator | 2025-05-23 00:42:09.336009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:09.337003 | orchestrator | Friday 23 May 2025 00:42:09 +0000 (0:00:00.179) 0:00:27.671 ************ 2025-05-23 00:42:09.681934 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:09.682366 | orchestrator | 2025-05-23 00:42:09.682729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:09.683707 | orchestrator | Friday 23 May 2025 00:42:09 +0000 (0:00:00.345) 0:00:28.017 ************ 2025-05-23 00:42:09.893316 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:09.893758 | orchestrator | 2025-05-23 00:42:09.894652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:09.895500 | orchestrator | Friday 23 May 2025 00:42:09 +0000 (0:00:00.212) 0:00:28.230 ************ 2025-05-23 00:42:10.312954 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4) 2025-05-23 00:42:10.313715 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4) 2025-05-23 00:42:10.315523 | orchestrator | 2025-05-23 00:42:10.315802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:10.316511 | orchestrator | Friday 23 May 2025 00:42:10 +0000 (0:00:00.419) 0:00:28.649 ************ 2025-05-23 00:42:10.766346 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7) 2025-05-23 00:42:10.766548 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7) 2025-05-23 00:42:10.766939 | orchestrator | 2025-05-23 00:42:10.767437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:10.767962 | orchestrator | Friday 23 May 2025 00:42:10 +0000 (0:00:00.452) 0:00:29.101 ************ 2025-05-23 00:42:11.224000 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652) 2025-05-23 00:42:11.225206 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652) 2025-05-23 00:42:11.226228 | orchestrator | 2025-05-23 00:42:11.227456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:11.228081 | orchestrator | Friday 23 May 2025 00:42:11 +0000 (0:00:00.458) 0:00:29.560 ************ 2025-05-23 00:42:11.710563 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d) 2025-05-23 00:42:11.711518 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d) 2025-05-23 00:42:11.712267 | orchestrator | 2025-05-23 00:42:11.712759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:11.713055 | orchestrator | Friday 23 May 2025 00:42:11 +0000 (0:00:00.478) 0:00:30.039 ************ 2025-05-23 00:42:12.038599 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:42:12.040046 | orchestrator | 2025-05-23 00:42:12.041392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:12.042492 | orchestrator | Friday 23 May 2025 00:42:12 +0000 (0:00:00.335) 0:00:30.375 ************ 2025-05-23 00:42:12.534526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-23 00:42:12.535265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-23 00:42:12.536989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-23 00:42:12.538303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-23 00:42:12.539412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-23 00:42:12.540555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-23 00:42:12.541158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-23 00:42:12.542786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-23 00:42:12.543098 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-23 00:42:12.544005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-23 00:42:12.544795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-23 00:42:12.545681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-23 00:42:12.546092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-23 00:42:12.546646 | orchestrator | 2025-05-23 00:42:12.547305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:12.547974 | orchestrator | Friday 23 May 2025 00:42:12 +0000 (0:00:00.495) 0:00:30.870 ************ 2025-05-23 00:42:12.729813 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:12.731270 | orchestrator | 2025-05-23 00:42:12.732352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:12.732829 | orchestrator | Friday 23 May 2025 00:42:12 +0000 (0:00:00.194) 0:00:31.065 ************ 2025-05-23 00:42:12.944097 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:12.944610 | orchestrator | 2025-05-23 00:42:12.946110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:12.947387 | orchestrator | Friday 23 May 2025 00:42:12 +0000 (0:00:00.215) 0:00:31.281 ************ 2025-05-23 00:42:13.391256 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:13.391709 | orchestrator | 2025-05-23 00:42:13.392706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:13.393697 | orchestrator | Friday 23 May 2025 00:42:13 +0000 (0:00:00.447) 0:00:31.728 ************ 2025-05-23 00:42:13.608468 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:13.608574 | orchestrator | 2025-05-23 00:42:13.609308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:13.610173 | orchestrator | Friday 23 May 2025 00:42:13 +0000 (0:00:00.216) 0:00:31.944 ************ 2025-05-23 00:42:13.822216 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:13.823301 | orchestrator | 2025-05-23 00:42:13.825464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:13.825496 | orchestrator | Friday 23 May 2025 00:42:13 +0000 (0:00:00.212) 0:00:32.157 ************ 2025-05-23 00:42:14.030291 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:14.030397 | orchestrator | 2025-05-23 00:42:14.031531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:14.032132 | orchestrator | Friday 23 May 2025 00:42:14 +0000 (0:00:00.207) 0:00:32.364 ************ 2025-05-23 00:42:14.241975 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:14.243892 | orchestrator | 2025-05-23 00:42:14.244748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:14.245800 | orchestrator | Friday 23 May 2025 00:42:14 +0000 (0:00:00.212) 0:00:32.577 ************ 2025-05-23 00:42:14.481908 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:14.482371 | orchestrator | 2025-05-23 00:42:14.483452 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-23 00:42:14.485535 | orchestrator | Friday 23 May 2025 00:42:14 +0000 (0:00:00.241) 0:00:32.818 ************ 2025-05-23 00:42:15.198108 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-23 00:42:15.198315 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-23 00:42:15.199442 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-23 00:42:15.200705 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-23 00:42:15.201573 | orchestrator | 2025-05-23 00:42:15.205276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:15.205302 | orchestrator | Friday 23 May 2025 00:42:15 +0000 (0:00:00.715) 0:00:33.534 ************ 2025-05-23 00:42:15.423982 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:15.424078 | orchestrator | 2025-05-23 00:42:15.424718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:15.425223 | orchestrator | Friday 23 May 2025 00:42:15 +0000 (0:00:00.226) 0:00:33.761 ************ 2025-05-23 00:42:15.631376 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:15.631983 | orchestrator | 2025-05-23 00:42:15.633158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:15.633833 | orchestrator | Friday 23 May 2025 00:42:15 +0000 (0:00:00.207) 0:00:33.968 ************ 2025-05-23 00:42:15.834180 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:15.834371 | orchestrator | 2025-05-23 00:42:15.837073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:15.837098 | orchestrator | Friday 23 May 2025 00:42:15 +0000 (0:00:00.199) 0:00:34.168 ************ 2025-05-23 00:42:16.464015 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:16.464321 | orchestrator | 2025-05-23 00:42:16.465263 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-23 00:42:16.466436 | orchestrator | Friday 23 May 2025 00:42:16 +0000 (0:00:00.631) 0:00:34.800 ************ 2025-05-23 00:42:16.607928 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:16.608153 | orchestrator | 2025-05-23 00:42:16.609784 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-23 00:42:16.611396 | orchestrator | Friday 23 May 2025 00:42:16 +0000 (0:00:00.143) 0:00:34.944 ************ 2025-05-23 00:42:16.814537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '125adf16-eac9-5ada-96e7-bcd4f30a545d'}}) 2025-05-23 00:42:16.815521 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}}) 2025-05-23 00:42:16.816923 | orchestrator | 2025-05-23 00:42:16.817709 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-23 00:42:16.818432 | orchestrator | Friday 23 May 2025 00:42:16 +0000 (0:00:00.205) 0:00:35.150 ************ 2025-05-23 00:42:18.608430 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'}) 2025-05-23 00:42:18.608559 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 
'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}) 2025-05-23 00:42:18.608752 | orchestrator | 2025-05-23 00:42:18.609446 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-23 00:42:18.611324 | orchestrator | Friday 23 May 2025 00:42:18 +0000 (0:00:01.793) 0:00:36.944 ************ 2025-05-23 00:42:18.802599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:18.802868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:18.803269 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:18.803837 | orchestrator | 2025-05-23 00:42:18.804456 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-23 00:42:18.804959 | orchestrator | Friday 23 May 2025 00:42:18 +0000 (0:00:00.194) 0:00:37.138 ************ 2025-05-23 00:42:20.159308 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'}) 2025-05-23 00:42:20.159421 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}) 2025-05-23 00:42:20.159438 | orchestrator | 2025-05-23 00:42:20.160462 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-23 00:42:20.160506 | orchestrator | Friday 23 May 2025 00:42:20 +0000 (0:00:01.354) 0:00:38.493 ************ 2025-05-23 00:42:20.315510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:20.315807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:20.316455 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:20.316849 | orchestrator | 2025-05-23 00:42:20.317377 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-23 00:42:20.317480 | orchestrator | Friday 23 May 2025 00:42:20 +0000 (0:00:00.159) 0:00:38.653 ************ 2025-05-23 00:42:20.448986 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:20.450137 | orchestrator | 2025-05-23 00:42:20.450879 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-23 00:42:20.452083 | orchestrator | Friday 23 May 2025 00:42:20 +0000 (0:00:00.133) 0:00:38.786 ************ 2025-05-23 00:42:20.611820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:20.611958 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:20.613122 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:20.614199 | orchestrator | 2025-05-23 00:42:20.614816 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-23 00:42:20.615286 | orchestrator | Friday 23 
May 2025 00:42:20 +0000 (0:00:00.162) 0:00:38.949 ************ 2025-05-23 00:42:20.995858 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:20.996175 | orchestrator | 2025-05-23 00:42:20.996817 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-23 00:42:20.997512 | orchestrator | Friday 23 May 2025 00:42:20 +0000 (0:00:00.383) 0:00:39.332 ************ 2025-05-23 00:42:21.160101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:21.161297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:21.165102 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:21.165134 | orchestrator | 2025-05-23 00:42:21.165149 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-23 00:42:21.165377 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.163) 0:00:39.496 ************ 2025-05-23 00:42:21.312391 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:21.313254 | orchestrator | 2025-05-23 00:42:21.313830 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-23 00:42:21.314651 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.152) 0:00:39.649 ************ 2025-05-23 00:42:21.479370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:21.481524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:21.482930 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:21.483377 | orchestrator | 2025-05-23 00:42:21.483922 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-23 00:42:21.484374 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.165) 0:00:39.814 ************ 2025-05-23 00:42:21.613593 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:21.613956 | orchestrator | 2025-05-23 00:42:21.614698 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-23 00:42:21.615469 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.136) 0:00:39.951 ************ 2025-05-23 00:42:21.774259 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:21.774516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:21.775188 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:21.775869 | orchestrator | 2025-05-23 00:42:21.776278 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-23 00:42:21.776654 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.161) 0:00:40.112 ************ 2025-05-23 00:42:21.971360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 
'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:21.971498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:21.972034 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:21.972490 | orchestrator | 2025-05-23 00:42:21.973309 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-23 00:42:21.973428 | orchestrator | Friday 23 May 2025 00:42:21 +0000 (0:00:00.192) 0:00:40.305 ************ 2025-05-23 00:42:22.138945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:22.139310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:22.140248 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:22.143403 | orchestrator | 2025-05-23 00:42:22.143432 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-23 00:42:22.143446 | orchestrator | Friday 23 May 2025 00:42:22 +0000 (0:00:00.169) 0:00:40.475 ************ 2025-05-23 00:42:22.266751 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:22.267382 | orchestrator | 2025-05-23 00:42:22.268495 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-23 00:42:22.269348 | orchestrator | Friday 23 May 2025 00:42:22 +0000 (0:00:00.128) 0:00:40.603 ************ 2025-05-23 00:42:22.392751 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:22.393309 | orchestrator | 2025-05-23 00:42:22.393991 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-23 00:42:22.397926 | orchestrator | Friday 23 May 2025 00:42:22 +0000 (0:00:00.125) 0:00:40.729 ************ 2025-05-23 00:42:22.537188 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:22.537280 | orchestrator | 2025-05-23 00:42:22.537804 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-23 00:42:22.538345 | orchestrator | Friday 23 May 2025 00:42:22 +0000 (0:00:00.145) 0:00:40.874 ************ 2025-05-23 00:42:22.859586 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:42:22.859794 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-23 00:42:22.859978 | orchestrator | } 2025-05-23 00:42:22.860583 | orchestrator | 2025-05-23 00:42:22.861130 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-23 00:42:22.861528 | orchestrator | Friday 23 May 2025 00:42:22 +0000 (0:00:00.322) 0:00:41.196 ************ 2025-05-23 00:42:23.011389 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:42:23.013885 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-23 00:42:23.013939 | orchestrator | } 2025-05-23 00:42:23.013953 | orchestrator | 2025-05-23 00:42:23.014865 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-23 00:42:23.014915 | orchestrator | Friday 23 May 2025 00:42:23 +0000 (0:00:00.149) 0:00:41.346 ************ 2025-05-23 00:42:23.151312 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:42:23.152319 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-23 
00:42:23.153084 | orchestrator | } 2025-05-23 00:42:23.154661 | orchestrator | 2025-05-23 00:42:23.155492 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-23 00:42:23.157248 | orchestrator | Friday 23 May 2025 00:42:23 +0000 (0:00:00.142) 0:00:41.488 ************ 2025-05-23 00:42:23.668650 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:23.668847 | orchestrator | 2025-05-23 00:42:23.668931 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-23 00:42:23.669898 | orchestrator | Friday 23 May 2025 00:42:23 +0000 (0:00:00.512) 0:00:42.001 ************ 2025-05-23 00:42:24.181165 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:24.182238 | orchestrator | 2025-05-23 00:42:24.182915 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-23 00:42:24.184335 | orchestrator | Friday 23 May 2025 00:42:24 +0000 (0:00:00.515) 0:00:42.517 ************ 2025-05-23 00:42:24.693940 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:24.694152 | orchestrator | 2025-05-23 00:42:24.696532 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-23 00:42:24.697575 | orchestrator | Friday 23 May 2025 00:42:24 +0000 (0:00:00.512) 0:00:43.030 ************ 2025-05-23 00:42:24.843861 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:24.845051 | orchestrator | 2025-05-23 00:42:24.845982 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-23 00:42:24.848523 | orchestrator | Friday 23 May 2025 00:42:24 +0000 (0:00:00.150) 0:00:43.180 ************ 2025-05-23 00:42:24.959190 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:24.959294 | orchestrator | 2025-05-23 00:42:24.959310 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-23 00:42:24.959468 | orchestrator | Friday 23 May 2025 00:42:24 +0000 (0:00:00.115) 0:00:43.296 ************ 2025-05-23 00:42:25.083171 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:25.083952 | orchestrator | 2025-05-23 00:42:25.084346 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-23 00:42:25.084885 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.122) 0:00:43.418 ************ 2025-05-23 00:42:25.233352 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:42:25.234922 | orchestrator |  "vgs_report": { 2025-05-23 00:42:25.236251 | orchestrator |  "vg": [] 2025-05-23 00:42:25.237240 | orchestrator |  } 2025-05-23 00:42:25.238754 | orchestrator | } 2025-05-23 00:42:25.238982 | orchestrator | 2025-05-23 00:42:25.239903 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-23 00:42:25.241479 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.151) 0:00:43.569 ************ 2025-05-23 00:42:25.408203 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:25.408311 | orchestrator | 2025-05-23 00:42:25.408327 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-23 00:42:25.408341 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.174) 0:00:43.744 ************ 2025-05-23 00:42:25.728382 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:25.733993 | orchestrator | 2025-05-23 00:42:25.736493 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-05-23 00:42:25.736522 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.317) 0:00:44.061 ************ 2025-05-23 00:42:25.861200 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:25.861503 | orchestrator | 2025-05-23 00:42:25.862115 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-23 00:42:25.862780 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.136) 0:00:44.198 ************ 2025-05-23 00:42:25.997495 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:25.998500 | orchestrator | 2025-05-23 00:42:25.998532 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-23 00:42:25.999096 | orchestrator | Friday 23 May 2025 00:42:25 +0000 (0:00:00.135) 0:00:44.334 ************ 2025-05-23 00:42:26.140093 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.140316 | orchestrator | 2025-05-23 00:42:26.141797 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-23 00:42:26.142344 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.142) 0:00:44.476 ************ 2025-05-23 00:42:26.278264 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.280955 | orchestrator | 2025-05-23 00:42:26.282092 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-23 00:42:26.282512 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.136) 0:00:44.612 ************ 2025-05-23 00:42:26.417183 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.417783 | orchestrator | 2025-05-23 00:42:26.418715 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-23 00:42:26.420504 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.136) 0:00:44.748 ************ 2025-05-23 00:42:26.559016 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.559512 | orchestrator | 2025-05-23 00:42:26.560189 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-23 00:42:26.560818 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.147) 0:00:44.896 ************ 2025-05-23 00:42:26.697657 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.698935 | orchestrator | 2025-05-23 00:42:26.699565 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-23 00:42:26.700785 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.138) 0:00:45.034 ************ 2025-05-23 00:42:26.860840 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:26.861019 | orchestrator | 2025-05-23 00:42:26.861507 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-23 00:42:26.862094 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.160) 0:00:45.195 ************ 2025-05-23 00:42:27.003061 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.003768 | orchestrator | 2025-05-23 00:42:27.004201 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-23 00:42:27.005462 | orchestrator | Friday 23 May 2025 00:42:26 +0000 (0:00:00.144) 0:00:45.340 ************ 2025-05-23 00:42:27.155915 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.157483 | orchestrator | 2025-05-23 00:42:27.158463 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-23 00:42:27.159482 | orchestrator | Friday 23 May 2025 00:42:27 +0000 (0:00:00.153) 0:00:45.493 ************ 2025-05-23 00:42:27.294150 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.297040 | orchestrator | 2025-05-23 00:42:27.298176 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-23 00:42:27.298851 | orchestrator | Friday 23 May 2025 00:42:27 +0000 (0:00:00.137) 0:00:45.630 ************ 2025-05-23 00:42:27.605445 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.606274 | orchestrator | 2025-05-23 00:42:27.607145 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-23 00:42:27.609166 | orchestrator | Friday 23 May 2025 00:42:27 +0000 (0:00:00.310) 0:00:45.941 ************ 2025-05-23 00:42:27.787148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:27.788136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:27.789666 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.792305 | orchestrator | 2025-05-23 00:42:27.792336 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-23 00:42:27.792388 | orchestrator | Friday 23 May 2025 00:42:27 +0000 (0:00:00.181) 0:00:46.123 ************ 2025-05-23 00:42:27.948761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:27.949280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:27.949942 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:27.952256 | orchestrator | 2025-05-23 00:42:27.952284 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-23 00:42:27.952342 | orchestrator | Friday 23 May 2025 00:42:27 +0000 (0:00:00.162) 0:00:46.285 ************ 2025-05-23 00:42:28.138858 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:28.139688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:28.140854 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:28.141225 | orchestrator | 2025-05-23 00:42:28.141833 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-23 00:42:28.142794 | orchestrator | Friday 23 May 2025 00:42:28 +0000 (0:00:00.190) 0:00:46.475 ************ 2025-05-23 00:42:28.311910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:28.312660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 
00:42:28.313190 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:28.314509 | orchestrator | 2025-05-23 00:42:28.316392 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-23 00:42:28.318909 | orchestrator | Friday 23 May 2025 00:42:28 +0000 (0:00:00.172) 0:00:46.648 ************ 2025-05-23 00:42:28.484121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:28.485822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:28.486242 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:28.487185 | orchestrator | 2025-05-23 00:42:28.488951 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-23 00:42:28.489278 | orchestrator | Friday 23 May 2025 00:42:28 +0000 (0:00:00.172) 0:00:46.821 ************ 2025-05-23 00:42:28.654084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:28.655296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:28.656176 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:28.657167 | orchestrator | 2025-05-23 00:42:28.658080 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-23 00:42:28.658693 | orchestrator | Friday 23 May 2025 00:42:28 +0000 (0:00:00.169) 0:00:46.990 ************ 2025-05-23 00:42:28.847099 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:28.848394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:28.849378 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:28.850059 | orchestrator | 2025-05-23 00:42:28.850723 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-23 00:42:28.851645 | orchestrator | Friday 23 May 2025 00:42:28 +0000 (0:00:00.193) 0:00:47.183 ************ 2025-05-23 00:42:29.010539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:29.010757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:29.011903 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:29.012184 | orchestrator | 2025-05-23 00:42:29.013339 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-23 00:42:29.014357 | orchestrator | Friday 23 May 2025 00:42:29 +0000 (0:00:00.162) 0:00:47.346 ************ 2025-05-23 00:42:29.510825 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:29.511318 | orchestrator | 2025-05-23 00:42:29.511846 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-23 00:42:29.512338 | orchestrator | Friday 23 May 2025 00:42:29 +0000 (0:00:00.500) 0:00:47.846 ************ 2025-05-23 00:42:30.023127 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:30.023323 | orchestrator | 2025-05-23 00:42:30.025031 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-23 00:42:30.027285 | orchestrator | Friday 23 May 2025 00:42:30 +0000 (0:00:00.512) 0:00:48.359 ************ 2025-05-23 00:42:30.340441 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:42:30.341957 | orchestrator | 2025-05-23 00:42:30.345175 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-23 00:42:30.345230 | orchestrator | Friday 23 May 2025 00:42:30 +0000 (0:00:00.317) 0:00:48.677 ************ 2025-05-23 00:42:30.519872 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'vg_name': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'}) 2025-05-23 00:42:30.520813 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'vg_name': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}) 2025-05-23 00:42:30.520977 | orchestrator | 2025-05-23 00:42:30.521725 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-23 00:42:30.522532 | orchestrator | Friday 23 May 2025 00:42:30 +0000 (0:00:00.179) 0:00:48.856 ************ 2025-05-23 00:42:30.694468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:30.694663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:30.695944 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:30.698710 | orchestrator | 2025-05-23 00:42:30.698792 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-23 00:42:30.698809 | orchestrator | Friday 23 May 2025 00:42:30 +0000 (0:00:00.173) 0:00:49.030 ************ 2025-05-23 00:42:30.856405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:30.857138 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:30.857656 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:30.861088 | orchestrator | 2025-05-23 00:42:30.862436 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-23 00:42:30.865352 | orchestrator | Friday 23 May 2025 00:42:30 +0000 (0:00:00.160) 0:00:49.191 ************ 2025-05-23 00:42:31.046219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'})  2025-05-23 00:42:31.046311 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'})  2025-05-23 00:42:31.046398 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:42:31.047933 | orchestrator | 2025-05-23 
00:42:31.049257 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-23 00:42:31.050375 | orchestrator | Friday 23 May 2025 00:42:31 +0000 (0:00:00.189) 0:00:49.381 ************ 2025-05-23 00:42:31.885631 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 00:42:31.886247 | orchestrator |  "lvm_report": { 2025-05-23 00:42:31.886873 | orchestrator |  "lv": [ 2025-05-23 00:42:31.887572 | orchestrator |  { 2025-05-23 00:42:31.888166 | orchestrator |  "lv_name": "osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d", 2025-05-23 00:42:31.888908 | orchestrator |  "vg_name": "ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d" 2025-05-23 00:42:31.889937 | orchestrator |  }, 2025-05-23 00:42:31.890123 | orchestrator |  { 2025-05-23 00:42:31.890631 | orchestrator |  "lv_name": "osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a", 2025-05-23 00:42:31.891255 | orchestrator |  "vg_name": "ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a" 2025-05-23 00:42:31.891569 | orchestrator |  } 2025-05-23 00:42:31.892295 | orchestrator |  ], 2025-05-23 00:42:31.892704 | orchestrator |  "pv": [ 2025-05-23 00:42:31.893299 | orchestrator |  { 2025-05-23 00:42:31.893699 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-23 00:42:31.894174 | orchestrator |  "vg_name": "ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d" 2025-05-23 00:42:31.894755 | orchestrator |  }, 2025-05-23 00:42:31.895225 | orchestrator |  { 2025-05-23 00:42:31.895666 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-23 00:42:31.895856 | orchestrator |  "vg_name": "ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a" 2025-05-23 00:42:31.896442 | orchestrator |  } 2025-05-23 00:42:31.896715 | orchestrator |  ] 2025-05-23 00:42:31.897031 | orchestrator |  } 2025-05-23 00:42:31.897434 | orchestrator | } 2025-05-23 00:42:31.897645 | orchestrator | 2025-05-23 00:42:31.898114 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-23 00:42:31.898345 | orchestrator | 2025-05-23 00:42:31.898835 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-23 00:42:31.899207 | orchestrator | Friday 23 May 2025 00:42:31 +0000 (0:00:00.839) 0:00:50.220 ************ 2025-05-23 00:42:32.126079 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-23 00:42:32.126170 | orchestrator | 2025-05-23 00:42:32.127457 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-23 00:42:32.128001 | orchestrator | Friday 23 May 2025 00:42:32 +0000 (0:00:00.242) 0:00:50.462 ************ 2025-05-23 00:42:32.366188 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:32.367736 | orchestrator | 2025-05-23 00:42:32.368800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:32.370135 | orchestrator | Friday 23 May 2025 00:42:32 +0000 (0:00:00.239) 0:00:50.702 ************ 2025-05-23 00:42:32.816957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-23 00:42:32.817062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-23 00:42:32.817136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-23 00:42:32.817509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-23 00:42:32.820453 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-23 00:42:32.820907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-23 00:42:32.821356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-23 00:42:32.821865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-23 00:42:32.822395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-23 00:42:32.822874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-23 00:42:32.823303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-23 00:42:32.823793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-23 00:42:32.824205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-23 00:42:32.824662 | orchestrator | 2025-05-23 00:42:32.825181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:32.825564 | orchestrator | Friday 23 May 2025 00:42:32 +0000 (0:00:00.448) 0:00:51.151 ************ 2025-05-23 00:42:33.014844 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:33.015030 | orchestrator | 2025-05-23 00:42:33.015736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:33.016488 | orchestrator | Friday 23 May 2025 00:42:33 +0000 (0:00:00.199) 0:00:51.351 ************ 2025-05-23 00:42:33.219033 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:33.219607 | orchestrator | 2025-05-23 00:42:33.220546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:33.221497 | orchestrator | Friday 23 May 2025 00:42:33 +0000 (0:00:00.203) 0:00:51.555 ************ 2025-05-23 00:42:33.431579 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:33.431792 | orchestrator | 2025-05-23 00:42:33.433248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:33.434103 | orchestrator | Friday 23 May 2025 00:42:33 +0000 (0:00:00.212) 0:00:51.767 ************ 2025-05-23 00:42:33.640437 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:33.641024 | orchestrator | 2025-05-23 00:42:33.641656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:33.642483 | orchestrator | Friday 23 May 2025 00:42:33 +0000 (0:00:00.209) 0:00:51.976 ************ 2025-05-23 00:42:34.235241 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:34.237553 | orchestrator | 2025-05-23 00:42:34.238814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:34.240267 | orchestrator | Friday 23 May 2025 00:42:34 +0000 (0:00:00.593) 0:00:52.570 ************ 2025-05-23 00:42:34.448096 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:34.448181 | orchestrator | 2025-05-23 00:42:34.448191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:34.448660 | orchestrator | Friday 23 May 2025 00:42:34 +0000 (0:00:00.212) 0:00:52.782 ************ 2025-05-23 00:42:34.653876 | orchestrator | skipping: 
[testbed-node-5] 2025-05-23 00:42:34.655143 | orchestrator | 2025-05-23 00:42:34.656876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:34.658341 | orchestrator | Friday 23 May 2025 00:42:34 +0000 (0:00:00.208) 0:00:52.990 ************ 2025-05-23 00:42:34.859670 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:34.860725 | orchestrator | 2025-05-23 00:42:34.861911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:34.863455 | orchestrator | Friday 23 May 2025 00:42:34 +0000 (0:00:00.205) 0:00:53.195 ************ 2025-05-23 00:42:35.262095 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572) 2025-05-23 00:42:35.262346 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572) 2025-05-23 00:42:35.262360 | orchestrator | 2025-05-23 00:42:35.262635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:35.263229 | orchestrator | Friday 23 May 2025 00:42:35 +0000 (0:00:00.403) 0:00:53.599 ************ 2025-05-23 00:42:35.727593 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7) 2025-05-23 00:42:35.727750 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7) 2025-05-23 00:42:35.727831 | orchestrator | 2025-05-23 00:42:35.730544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:35.730767 | orchestrator | Friday 23 May 2025 00:42:35 +0000 (0:00:00.464) 0:00:54.063 ************ 2025-05-23 00:42:36.170810 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127) 2025-05-23 00:42:36.171160 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127) 2025-05-23 00:42:36.174345 | orchestrator | 2025-05-23 00:42:36.174375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:36.175107 | orchestrator | Friday 23 May 2025 00:42:36 +0000 (0:00:00.443) 0:00:54.507 ************ 2025-05-23 00:42:36.622202 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6) 2025-05-23 00:42:36.623083 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6) 2025-05-23 00:42:36.623862 | orchestrator | 2025-05-23 00:42:36.624844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-23 00:42:36.625251 | orchestrator | Friday 23 May 2025 00:42:36 +0000 (0:00:00.450) 0:00:54.957 ************ 2025-05-23 00:42:36.958908 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-23 00:42:36.959027 | orchestrator | 2025-05-23 00:42:36.960564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:36.960591 | orchestrator | Friday 23 May 2025 00:42:36 +0000 (0:00:00.338) 0:00:55.295 ************ 2025-05-23 00:42:37.665078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-23 00:42:37.665181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
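Annotation: the run of "included:" entries around this point comes from the "Add known partitions to the list of available block devices" step, which includes /ansible/tasks/_add-device-partitions.yml once per block device found on the node, just as /ansible/tasks/_add-device-links.yml was included per device a few tasks earlier. The task files themselves are not part of this log, so the following Ansible sketch only illustrates the pattern under assumed variable names (device, available_block_devices, ceph_osd_devices), together with the kind of lvg/lvol calls that the "Create block VGs" / "Create block LVs" tasks (seen above for testbed-node-3/4 and below for testbed-node-5) would need to produce the VG "ceph-<osd_lvm_uuid>" and LV "osd-block-<osd_lvm_uuid>" names visible in the output:

# Sketch only; the real OSISM task files are not shown in this log.
# "device", "available_block_devices" and "ceph_osd_devices" are assumed names.
- name: Add known partitions to the list of available block devices
  ansible.builtin.include_tasks: _add-device-partitions.yml
  loop: "{{ ansible_devices.keys() | list }}"
  loop_control:
    loop_var: device

# _add-device-partitions.yml (hypothetical body): append partitions of one device
- name: Add partitions of {{ device }}
  ansible.builtin.set_fact:
    available_block_devices: "{{ available_block_devices + (ansible_devices[device].partitions.keys() | list) }}"
  when: ansible_devices[device].partitions | length > 0

# Roughly the effect of 'Create block VGs' / 'Create block LVs' per OSD device:
# one VG "ceph-<uuid>" backed by the raw device, one LV "osd-block-<uuid>" filling it.
- name: Create block VGs
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: "100%FREE"
  loop: "{{ ceph_osd_devices | dict2items }}"

The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Print LVM report data" tasks at the end of each node's play then presumably just merge and dump JSON output of the lvs/pvs commands, which is why only lv_name/vg_name and pv_name/vg_name pairs appear in the printed report.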
2025-05-23 00:42:37.666454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-23 00:42:37.667734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-23 00:42:37.671633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-23 00:42:37.672324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-23 00:42:37.673116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-23 00:42:37.673898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-23 00:42:37.677233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-23 00:42:37.678079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-23 00:42:37.681242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-23 00:42:37.681601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-23 00:42:37.682097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-23 00:42:37.685045 | orchestrator | 2025-05-23 00:42:37.685339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:37.685813 | orchestrator | Friday 23 May 2025 00:42:37 +0000 (0:00:00.704) 0:00:56.000 ************ 2025-05-23 00:42:37.894378 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:37.894854 | orchestrator | 2025-05-23 00:42:37.895658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:37.896855 | orchestrator | Friday 23 May 2025 00:42:37 +0000 (0:00:00.230) 0:00:56.230 ************ 2025-05-23 00:42:38.114074 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:38.114555 | orchestrator | 2025-05-23 00:42:38.115523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:38.116301 | orchestrator | Friday 23 May 2025 00:42:38 +0000 (0:00:00.218) 0:00:56.449 ************ 2025-05-23 00:42:38.334281 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:38.334590 | orchestrator | 2025-05-23 00:42:38.335998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:38.336814 | orchestrator | Friday 23 May 2025 00:42:38 +0000 (0:00:00.222) 0:00:56.671 ************ 2025-05-23 00:42:38.531490 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:38.532323 | orchestrator | 2025-05-23 00:42:38.532802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:38.533396 | orchestrator | Friday 23 May 2025 00:42:38 +0000 (0:00:00.196) 0:00:56.868 ************ 2025-05-23 00:42:38.735238 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:38.736318 | orchestrator | 2025-05-23 00:42:38.740381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:38.740418 | orchestrator | Friday 23 May 2025 00:42:38 +0000 (0:00:00.199) 0:00:57.067 ************ 2025-05-23 00:42:38.927405 | orchestrator | 
skipping: [testbed-node-5] 2025-05-23 00:42:38.927912 | orchestrator | 2025-05-23 00:42:38.927940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:38.928127 | orchestrator | Friday 23 May 2025 00:42:38 +0000 (0:00:00.197) 0:00:57.264 ************ 2025-05-23 00:42:39.155228 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:39.155655 | orchestrator | 2025-05-23 00:42:39.157665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:39.157716 | orchestrator | Friday 23 May 2025 00:42:39 +0000 (0:00:00.226) 0:00:57.490 ************ 2025-05-23 00:42:39.355067 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:39.356347 | orchestrator | 2025-05-23 00:42:39.357536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:39.358808 | orchestrator | Friday 23 May 2025 00:42:39 +0000 (0:00:00.200) 0:00:57.691 ************ 2025-05-23 00:42:40.207013 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-23 00:42:40.207133 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-23 00:42:40.207216 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-23 00:42:40.208221 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-23 00:42:40.211245 | orchestrator | 2025-05-23 00:42:40.211855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:40.212517 | orchestrator | Friday 23 May 2025 00:42:40 +0000 (0:00:00.850) 0:00:58.542 ************ 2025-05-23 00:42:40.409600 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:40.409815 | orchestrator | 2025-05-23 00:42:40.411425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:40.414662 | orchestrator | Friday 23 May 2025 00:42:40 +0000 (0:00:00.203) 0:00:58.746 ************ 2025-05-23 00:42:41.000129 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:41.000297 | orchestrator | 2025-05-23 00:42:41.000743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:41.001526 | orchestrator | Friday 23 May 2025 00:42:40 +0000 (0:00:00.590) 0:00:59.336 ************ 2025-05-23 00:42:41.206647 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:41.206805 | orchestrator | 2025-05-23 00:42:41.207841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-23 00:42:41.208831 | orchestrator | Friday 23 May 2025 00:42:41 +0000 (0:00:00.206) 0:00:59.543 ************ 2025-05-23 00:42:41.388449 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:41.389035 | orchestrator | 2025-05-23 00:42:41.390566 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-23 00:42:41.391520 | orchestrator | Friday 23 May 2025 00:42:41 +0000 (0:00:00.182) 0:00:59.725 ************ 2025-05-23 00:42:41.518840 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:41.519538 | orchestrator | 2025-05-23 00:42:41.520573 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-23 00:42:41.521756 | orchestrator | Friday 23 May 2025 00:42:41 +0000 (0:00:00.129) 0:00:59.855 ************ 2025-05-23 00:42:41.721731 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}}) 2025-05-23 00:42:41.722719 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dafe69f8-630b-5486-ba76-590e0b4d1820'}}) 2025-05-23 00:42:41.723163 | orchestrator | 2025-05-23 00:42:41.726283 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-23 00:42:41.726308 | orchestrator | Friday 23 May 2025 00:42:41 +0000 (0:00:00.202) 0:01:00.057 ************ 2025-05-23 00:42:43.584867 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}) 2025-05-23 00:42:43.585967 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'}) 2025-05-23 00:42:43.587170 | orchestrator | 2025-05-23 00:42:43.588426 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-23 00:42:43.589514 | orchestrator | Friday 23 May 2025 00:42:43 +0000 (0:00:01.862) 0:01:01.920 ************ 2025-05-23 00:42:43.774994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:43.775125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:43.775139 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:43.775152 | orchestrator | 2025-05-23 00:42:43.775163 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-23 00:42:43.775525 | orchestrator | Friday 23 May 2025 00:42:43 +0000 (0:00:00.177) 0:01:02.097 ************ 2025-05-23 00:42:45.093167 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}) 2025-05-23 00:42:45.096148 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'}) 2025-05-23 00:42:45.097123 | orchestrator | 2025-05-23 00:42:45.097992 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-23 00:42:45.098630 | orchestrator | Friday 23 May 2025 00:42:45 +0000 (0:00:01.330) 0:01:03.428 ************ 2025-05-23 00:42:45.464018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:45.464646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:45.465553 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:45.467867 | orchestrator | 2025-05-23 00:42:45.467899 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-23 00:42:45.467913 | orchestrator | Friday 23 May 2025 00:42:45 +0000 (0:00:00.371) 0:01:03.800 ************ 2025-05-23 00:42:45.606402 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:45.606529 | orchestrator | 2025-05-23 00:42:45.607667 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-23 00:42:45.608788 | orchestrator | Friday 23 May 2025 00:42:45 +0000 (0:00:00.142) 0:01:03.942 ************ 2025-05-23 00:42:45.770750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:45.776347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:45.776389 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:45.776403 | orchestrator | 2025-05-23 00:42:45.776559 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-23 00:42:45.777182 | orchestrator | Friday 23 May 2025 00:42:45 +0000 (0:00:00.159) 0:01:04.102 ************ 2025-05-23 00:42:45.909054 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:45.909124 | orchestrator | 2025-05-23 00:42:45.909174 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-23 00:42:45.910109 | orchestrator | Friday 23 May 2025 00:42:45 +0000 (0:00:00.144) 0:01:04.247 ************ 2025-05-23 00:42:46.067515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:46.067598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:46.069501 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.069581 | orchestrator | 2025-05-23 00:42:46.069676 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-23 00:42:46.069802 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.156) 0:01:04.403 ************ 2025-05-23 00:42:46.200631 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.200749 | orchestrator | 2025-05-23 00:42:46.200764 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-23 00:42:46.200806 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.135) 0:01:04.538 ************ 2025-05-23 00:42:46.326971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:46.327137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:46.327214 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.328451 | orchestrator | 2025-05-23 00:42:46.332199 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-23 00:42:46.332248 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.126) 0:01:04.665 ************ 2025-05-23 00:42:46.448253 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:46.448897 | orchestrator | 2025-05-23 00:42:46.449776 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-23 00:42:46.452565 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.120) 0:01:04.786 ************ 2025-05-23 00:42:46.630641 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:46.631096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:46.631635 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.632544 | orchestrator | 2025-05-23 00:42:46.633614 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-23 00:42:46.633964 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.181) 0:01:04.967 ************ 2025-05-23 00:42:46.788450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:46.788602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:46.790948 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.791157 | orchestrator | 2025-05-23 00:42:46.791661 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-23 00:42:46.791681 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.157) 0:01:05.125 ************ 2025-05-23 00:42:46.926647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:46.927014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:46.927832 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:46.928160 | orchestrator | 2025-05-23 00:42:46.928923 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-23 00:42:46.929242 | orchestrator | Friday 23 May 2025 00:42:46 +0000 (0:00:00.139) 0:01:05.264 ************ 2025-05-23 00:42:47.055431 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:47.055802 | orchestrator | 2025-05-23 00:42:47.056499 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-23 00:42:47.056877 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.128) 0:01:05.393 ************ 2025-05-23 00:42:47.346486 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:47.346596 | orchestrator | 2025-05-23 00:42:47.346765 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-23 00:42:47.347408 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.291) 0:01:05.684 ************ 2025-05-23 00:42:47.480666 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:47.480846 | orchestrator | 2025-05-23 00:42:47.481061 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-23 00:42:47.481449 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.133) 0:01:05.818 ************ 2025-05-23 00:42:47.609041 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 00:42:47.609127 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-23 00:42:47.609385 | orchestrator | } 2025-05-23 00:42:47.609769 | orchestrator | 2025-05-23 00:42:47.610124 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-23 00:42:47.610494 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.128) 0:01:05.947 ************ 2025-05-23 00:42:47.745261 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 00:42:47.745389 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-23 00:42:47.745518 | orchestrator | } 2025-05-23 00:42:47.745854 | orchestrator | 2025-05-23 00:42:47.746244 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-23 00:42:47.748423 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.135) 0:01:06.082 ************ 2025-05-23 00:42:47.858319 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 00:42:47.858646 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-23 00:42:47.858675 | orchestrator | } 2025-05-23 00:42:47.859614 | orchestrator | 2025-05-23 00:42:47.859921 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-23 00:42:47.860924 | orchestrator | Friday 23 May 2025 00:42:47 +0000 (0:00:00.113) 0:01:06.196 ************ 2025-05-23 00:42:48.337203 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:48.337580 | orchestrator | 2025-05-23 00:42:48.338110 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-23 00:42:48.338528 | orchestrator | Friday 23 May 2025 00:42:48 +0000 (0:00:00.478) 0:01:06.675 ************ 2025-05-23 00:42:48.835762 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:48.836220 | orchestrator | 2025-05-23 00:42:48.837096 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-23 00:42:48.838088 | orchestrator | Friday 23 May 2025 00:42:48 +0000 (0:00:00.497) 0:01:07.172 ************ 2025-05-23 00:42:49.361507 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:49.361720 | orchestrator | 2025-05-23 00:42:49.362199 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-23 00:42:49.362864 | orchestrator | Friday 23 May 2025 00:42:49 +0000 (0:00:00.525) 0:01:07.697 ************ 2025-05-23 00:42:49.509109 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:49.510077 | orchestrator | 2025-05-23 00:42:49.511029 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-23 00:42:49.511929 | orchestrator | Friday 23 May 2025 00:42:49 +0000 (0:00:00.149) 0:01:07.846 ************ 2025-05-23 00:42:49.622662 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:49.622799 | orchestrator | 2025-05-23 00:42:49.623101 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-23 00:42:49.624222 | orchestrator | Friday 23 May 2025 00:42:49 +0000 (0:00:00.113) 0:01:07.960 ************ 2025-05-23 00:42:49.720746 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:49.720910 | orchestrator | 2025-05-23 00:42:49.720969 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-23 00:42:49.721433 | orchestrator | Friday 23 May 2025 00:42:49 +0000 (0:00:00.097) 0:01:08.058 ************ 2025-05-23 00:42:49.982164 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 00:42:49.982624 | orchestrator |  "vgs_report": { 2025-05-23 00:42:49.983159 | orchestrator |  "vg": [] 2025-05-23 00:42:49.983385 | orchestrator |  } 2025-05-23 00:42:49.983882 | orchestrator 
| } 2025-05-23 00:42:49.984180 | orchestrator | 2025-05-23 00:42:49.984932 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-23 00:42:49.986089 | orchestrator | Friday 23 May 2025 00:42:49 +0000 (0:00:00.261) 0:01:08.320 ************ 2025-05-23 00:42:50.109198 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.109679 | orchestrator | 2025-05-23 00:42:50.111561 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-23 00:42:50.111601 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.125) 0:01:08.446 ************ 2025-05-23 00:42:50.238230 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.238944 | orchestrator | 2025-05-23 00:42:50.241084 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-23 00:42:50.241681 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.129) 0:01:08.575 ************ 2025-05-23 00:42:50.369903 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.370183 | orchestrator | 2025-05-23 00:42:50.370794 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-23 00:42:50.371517 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.130) 0:01:08.706 ************ 2025-05-23 00:42:50.491277 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.491793 | orchestrator | 2025-05-23 00:42:50.492257 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-23 00:42:50.493652 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.122) 0:01:08.828 ************ 2025-05-23 00:42:50.605608 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.606537 | orchestrator | 2025-05-23 00:42:50.607290 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-23 00:42:50.608841 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.113) 0:01:08.942 ************ 2025-05-23 00:42:50.747739 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.748654 | orchestrator | 2025-05-23 00:42:50.749411 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-23 00:42:50.750230 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.141) 0:01:09.084 ************ 2025-05-23 00:42:50.870135 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:50.870524 | orchestrator | 2025-05-23 00:42:50.871581 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-23 00:42:50.872306 | orchestrator | Friday 23 May 2025 00:42:50 +0000 (0:00:00.123) 0:01:09.207 ************ 2025-05-23 00:42:51.019407 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.020399 | orchestrator | 2025-05-23 00:42:51.022996 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-23 00:42:51.023016 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.149) 0:01:09.357 ************ 2025-05-23 00:42:51.136455 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.136622 | orchestrator | 2025-05-23 00:42:51.137418 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-23 00:42:51.137968 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.116) 0:01:09.474 ************ 2025-05-23 00:42:51.276947 | orchestrator | 
skipping: [testbed-node-5] 2025-05-23 00:42:51.277441 | orchestrator | 2025-05-23 00:42:51.278476 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-23 00:42:51.279280 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.140) 0:01:09.614 ************ 2025-05-23 00:42:51.409282 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.409825 | orchestrator | 2025-05-23 00:42:51.410817 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-23 00:42:51.411529 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.132) 0:01:09.747 ************ 2025-05-23 00:42:51.659789 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.661220 | orchestrator | 2025-05-23 00:42:51.661254 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-23 00:42:51.661372 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.248) 0:01:09.996 ************ 2025-05-23 00:42:51.789778 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.790534 | orchestrator | 2025-05-23 00:42:51.791127 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-23 00:42:51.793393 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.129) 0:01:10.126 ************ 2025-05-23 00:42:51.906472 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:51.906833 | orchestrator | 2025-05-23 00:42:51.906863 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-23 00:42:51.906876 | orchestrator | Friday 23 May 2025 00:42:51 +0000 (0:00:00.117) 0:01:10.243 ************ 2025-05-23 00:42:52.059106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.059235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.059312 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.059415 | orchestrator | 2025-05-23 00:42:52.059899 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-23 00:42:52.060625 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.153) 0:01:10.396 ************ 2025-05-23 00:42:52.190990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.191861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.192576 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.193072 | orchestrator | 2025-05-23 00:42:52.193635 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-23 00:42:52.196099 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.132) 0:01:10.529 ************ 2025-05-23 00:42:52.346664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.347109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.347931 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.350186 | orchestrator | 2025-05-23 00:42:52.350465 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-23 00:42:52.350901 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.155) 0:01:10.684 ************ 2025-05-23 00:42:52.513525 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.513604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.514311 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.515157 | orchestrator | 2025-05-23 00:42:52.515629 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-23 00:42:52.516452 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.165) 0:01:10.850 ************ 2025-05-23 00:42:52.688468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.688540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.688792 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.689154 | orchestrator | 2025-05-23 00:42:52.689593 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-23 00:42:52.689857 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.176) 0:01:11.027 ************ 2025-05-23 00:42:52.846453 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:52.847625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:52.847986 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:52.850305 | orchestrator | 2025-05-23 00:42:52.850354 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-23 00:42:52.850368 | orchestrator | Friday 23 May 2025 00:42:52 +0000 (0:00:00.156) 0:01:11.183 ************ 2025-05-23 00:42:53.005619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:53.005853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:53.005875 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:53.006361 | orchestrator | 2025-05-23 00:42:53.006907 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-23 00:42:53.007212 | orchestrator | Friday 23 May 2025 00:42:53 +0000 (0:00:00.159) 0:01:11.343 ************ 2025-05-23 00:42:53.151404 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:53.151550 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:53.152086 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:53.152666 | orchestrator | 2025-05-23 00:42:53.153358 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-23 00:42:53.153843 | orchestrator | Friday 23 May 2025 00:42:53 +0000 (0:00:00.145) 0:01:11.489 ************ 2025-05-23 00:42:53.790174 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:53.790258 | orchestrator | 2025-05-23 00:42:53.790273 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-23 00:42:53.791594 | orchestrator | Friday 23 May 2025 00:42:53 +0000 (0:00:00.634) 0:01:12.123 ************ 2025-05-23 00:42:54.278392 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:54.278863 | orchestrator | 2025-05-23 00:42:54.280128 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-23 00:42:54.280992 | orchestrator | Friday 23 May 2025 00:42:54 +0000 (0:00:00.491) 0:01:12.615 ************ 2025-05-23 00:42:54.428673 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:42:54.429339 | orchestrator | 2025-05-23 00:42:54.430160 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-23 00:42:54.430969 | orchestrator | Friday 23 May 2025 00:42:54 +0000 (0:00:00.150) 0:01:12.765 ************ 2025-05-23 00:42:54.627532 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'vg_name': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}) 2025-05-23 00:42:54.627650 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'vg_name': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'}) 2025-05-23 00:42:54.628562 | orchestrator | 2025-05-23 00:42:54.629418 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-23 00:42:54.629744 | orchestrator | Friday 23 May 2025 00:42:54 +0000 (0:00:00.198) 0:01:12.963 ************ 2025-05-23 00:42:54.799471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:54.799907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:54.800977 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:54.802884 | orchestrator | 2025-05-23 00:42:54.803910 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-23 00:42:54.804052 | orchestrator | Friday 23 May 2025 00:42:54 +0000 (0:00:00.171) 0:01:13.135 ************ 2025-05-23 00:42:54.971141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:54.971990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  
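The "Get list of Ceph LVs/PVs with associated VGs" tasks above populate the _lvs_cmd_output and _pvs_cmd_output facts that feed the lvm_volumes consistency checks and the lvm_report printed further below. A minimal Ansible sketch of how such JSON inventories can be collected with the stock LVM reporting tools; the exact module arguments and any filtering to the ceph-* volume groups used by the OSISM role are assumptions here, not the role's verbatim tasks:

# Hedged sketch only: gather LV->VG and PV->VG mappings as JSON and merge them
# into a single structure shaped like the lvm_report shown below.
# Filtering to ceph-* VGs (as the task names imply) is omitted for brevity.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"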
2025-05-23 00:42:54.973480 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:54.974966 | orchestrator | 2025-05-23 00:42:54.976268 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-23 00:42:54.976863 | orchestrator | Friday 23 May 2025 00:42:54 +0000 (0:00:00.171) 0:01:13.307 ************ 2025-05-23 00:42:55.172905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'})  2025-05-23 00:42:55.173871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'})  2025-05-23 00:42:55.175268 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:42:55.176684 | orchestrator | 2025-05-23 00:42:55.176889 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-23 00:42:55.177973 | orchestrator | Friday 23 May 2025 00:42:55 +0000 (0:00:00.201) 0:01:13.509 ************ 2025-05-23 00:42:55.758681 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 00:42:55.759953 | orchestrator |  "lvm_report": { 2025-05-23 00:42:55.762678 | orchestrator |  "lv": [ 2025-05-23 00:42:55.763686 | orchestrator |  { 2025-05-23 00:42:55.764839 | orchestrator |  "lv_name": "osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0", 2025-05-23 00:42:55.765781 | orchestrator |  "vg_name": "ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0" 2025-05-23 00:42:55.766671 | orchestrator |  }, 2025-05-23 00:42:55.767524 | orchestrator |  { 2025-05-23 00:42:55.767817 | orchestrator |  "lv_name": "osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820", 2025-05-23 00:42:55.768952 | orchestrator |  "vg_name": "ceph-dafe69f8-630b-5486-ba76-590e0b4d1820" 2025-05-23 00:42:55.769738 | orchestrator |  } 2025-05-23 00:42:55.770784 | orchestrator |  ], 2025-05-23 00:42:55.770952 | orchestrator |  "pv": [ 2025-05-23 00:42:55.771804 | orchestrator |  { 2025-05-23 00:42:55.772337 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-23 00:42:55.773025 | orchestrator |  "vg_name": "ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0" 2025-05-23 00:42:55.773811 | orchestrator |  }, 2025-05-23 00:42:55.774176 | orchestrator |  { 2025-05-23 00:42:55.775328 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-23 00:42:55.775359 | orchestrator |  "vg_name": "ceph-dafe69f8-630b-5486-ba76-590e0b4d1820" 2025-05-23 00:42:55.775938 | orchestrator |  } 2025-05-23 00:42:55.776483 | orchestrator |  ] 2025-05-23 00:42:55.777056 | orchestrator |  } 2025-05-23 00:42:55.778454 | orchestrator | } 2025-05-23 00:42:55.778480 | orchestrator | 2025-05-23 00:42:55.779895 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:42:55.779955 | orchestrator | 2025-05-23 00:42:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:42:55.779977 | orchestrator | 2025-05-23 00:42:55 | INFO  | Please wait and do not abort execution. 
2025-05-23 00:42:55.780340 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-23 00:42:55.780849 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-23 00:42:55.781458 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-23 00:42:55.782109 | orchestrator |
2025-05-23 00:42:55.782724 | orchestrator |
2025-05-23 00:42:55.783209 | orchestrator |
2025-05-23 00:42:55.784276 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 00:42:55.784789 | orchestrator | Friday 23 May 2025 00:42:55 +0000 (0:00:00.586) 0:01:14.095 ************
2025-05-23 00:42:55.786614 | orchestrator | ===============================================================================
2025-05-23 00:42:55.792722 | orchestrator | Create block VGs -------------------------------------------------------- 5.80s
2025-05-23 00:42:55.797661 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s
2025-05-23 00:42:55.797778 | orchestrator | Print LVM report data --------------------------------------------------- 2.11s
2025-05-23 00:42:55.803005 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.10s
2025-05-23 00:42:55.803467 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s
2025-05-23 00:42:55.804098 | orchestrator | Add known partitions to the list of available block devices ------------- 1.62s
2025-05-23 00:42:55.804824 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s
2025-05-23 00:42:55.806106 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s
2025-05-23 00:42:55.806365 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s
2025-05-23 00:42:55.806808 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s
2025-05-23 00:42:55.807387 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.04s
2025-05-23 00:42:55.807854 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2025-05-23 00:42:55.808424 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-05-23 00:42:55.808920 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-23 00:42:55.810128 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.70s
2025-05-23 00:42:55.810620 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.69s
2025-05-23 00:42:55.811232 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.68s
2025-05-23 00:42:55.811816 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-05-23 00:42:55.812391 | orchestrator | Create WAL VGs ---------------------------------------------------------- 0.67s
2025-05-23 00:42:55.812955 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-23 00:42:57.579041 | orchestrator | 2025-05-23 00:42:57 | INFO  | Task f221559a-8805-4897-87fb-76b63e3c93f8 (facts) was prepared for execution.
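The recap above closes the LVM preparation for the Ceph OSDs on testbed-node-3/4/5: one ceph-<uuid> VG with a single osd-block-<uuid> LV was created per OSD disk (/dev/sdb and /dev/sdc on testbed-node-5, per the lvm_report). The data/data_vg pairs looped over in the play correspond to the lvm_volumes list referenced by the fail checks; a sketch of what that list would look like for testbed-node-5, inferred from the log items (the exact placement of the variable in the OSISM configuration is an assumption):

# Hedged sketch of an lvm_volumes entry per prepared OSD LV on testbed-node-5.
lvm_volumes:
  - data: osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0
    data_vg: ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0
  - data: osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820
    data_vg: ceph-dafe69f8-630b-5486-ba76-590e0b4d1820

Each entry points an OSD at a pre-built LV/VG pair rather than a raw device, which is why this play only creates VGs and LVs and leaves the actual OSD creation to the later Ceph roles.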
2025-05-23 00:42:57.579161 | orchestrator | 2025-05-23 00:42:57 | INFO  | It takes a moment until task f221559a-8805-4897-87fb-76b63e3c93f8 (facts) has been started and output is visible here. 2025-05-23 00:43:00.649493 | orchestrator | 2025-05-23 00:43:00.650871 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-23 00:43:00.654546 | orchestrator | 2025-05-23 00:43:00.655235 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-23 00:43:00.656053 | orchestrator | Friday 23 May 2025 00:43:00 +0000 (0:00:00.194) 0:00:00.194 ************ 2025-05-23 00:43:01.656923 | orchestrator | ok: [testbed-manager] 2025-05-23 00:43:01.657092 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:43:01.658243 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:43:01.659730 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:43:01.660459 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:43:01.661766 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:43:01.662753 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:43:01.663267 | orchestrator | 2025-05-23 00:43:01.664202 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-23 00:43:01.665900 | orchestrator | Friday 23 May 2025 00:43:01 +0000 (0:00:01.006) 0:00:01.200 ************ 2025-05-23 00:43:01.824070 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:43:01.903815 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:43:01.983213 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:43:02.080594 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:43:02.155546 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:43:02.876531 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:43:02.877297 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:43:02.880637 | orchestrator | 2025-05-23 00:43:02.881796 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-23 00:43:02.883288 | orchestrator | 2025-05-23 00:43:02.884541 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-23 00:43:02.885692 | orchestrator | Friday 23 May 2025 00:43:02 +0000 (0:00:01.220) 0:00:02.421 ************ 2025-05-23 00:43:08.275496 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:43:08.275614 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:43:08.278654 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:43:08.278753 | orchestrator | ok: [testbed-manager] 2025-05-23 00:43:08.279564 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:43:08.281045 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:43:08.281655 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:43:08.281989 | orchestrator | 2025-05-23 00:43:08.284665 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-23 00:43:08.285922 | orchestrator | 2025-05-23 00:43:08.286225 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-23 00:43:08.287161 | orchestrator | Friday 23 May 2025 00:43:08 +0000 (0:00:05.399) 0:00:07.820 ************ 2025-05-23 00:43:08.601543 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:43:08.692298 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:43:08.767111 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:43:08.869620 | orchestrator | skipping: [testbed-node-2] 2025-05-23 
00:43:08.951519 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:43:08.985209 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:43:08.985521 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:43:08.986292 | orchestrator | 2025-05-23 00:43:08.987918 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:43:08.987961 | orchestrator | 2025-05-23 00:43:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-23 00:43:08.987977 | orchestrator | 2025-05-23 00:43:08 | INFO  | Please wait and do not abort execution. 2025-05-23 00:43:08.988273 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.988818 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.989419 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.989984 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.991097 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.991305 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.993289 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:43:08.993893 | orchestrator | 2025-05-23 00:43:08.994467 | orchestrator | Friday 23 May 2025 00:43:08 +0000 (0:00:00.712) 0:00:08.533 ************ 2025-05-23 00:43:08.994811 | orchestrator | =============================================================================== 2025-05-23 00:43:08.995238 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.40s 2025-05-23 00:43:08.995581 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-05-23 00:43:08.995982 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-05-23 00:43:08.996434 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s 2025-05-23 00:43:09.487843 | orchestrator | 2025-05-23 00:43:09.490123 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri May 23 00:43:09 UTC 2025 2025-05-23 00:43:09.490167 | orchestrator | 2025-05-23 00:43:10.878095 | orchestrator | 2025-05-23 00:43:10 | INFO  | Collection nutshell is prepared for execution 2025-05-23 00:43:10.878160 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [0] - dotfiles 2025-05-23 00:43:10.881829 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [0] - homer 2025-05-23 00:43:10.881844 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [0] - netdata 2025-05-23 00:43:10.881849 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [0] - openstackclient 2025-05-23 00:43:10.881853 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [0] - phpmyadmin 2025-05-23 00:43:10.881857 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [0] - common 2025-05-23 00:43:10.883208 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [1] -- loadbalancer 2025-05-23 00:43:10.883219 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [2] --- opensearch 2025-05-23 00:43:10.883223 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [2] --- mariadb-ng 2025-05-23 00:43:10.883304 | orchestrator | 2025-05-23 
00:43:10 | INFO  | D [3] ---- horizon 2025-05-23 00:43:10.883312 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [3] ---- keystone 2025-05-23 00:43:10.883316 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [4] ----- neutron 2025-05-23 00:43:10.883321 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ wait-for-nova 2025-05-23 00:43:10.883490 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [5] ------ octavia 2025-05-23 00:43:10.883771 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- barbican 2025-05-23 00:43:10.883779 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- designate 2025-05-23 00:43:10.884093 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- ironic 2025-05-23 00:43:10.884103 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- placement 2025-05-23 00:43:10.884107 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- magnum 2025-05-23 00:43:10.884259 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [1] -- openvswitch 2025-05-23 00:43:10.884266 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [2] --- ovn 2025-05-23 00:43:10.884590 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [1] -- memcached 2025-05-23 00:43:10.884597 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [1] -- redis 2025-05-23 00:43:10.884771 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [1] -- rabbitmq-ng 2025-05-23 00:43:10.884778 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [0] - kubernetes 2025-05-23 00:43:10.884783 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [1] -- kubeconfig 2025-05-23 00:43:10.884787 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [1] -- copy-kubeconfig 2025-05-23 00:43:10.884945 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [0] - ceph 2025-05-23 00:43:10.886012 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [1] -- ceph-pools 2025-05-23 00:43:10.886044 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [2] --- copy-ceph-keys 2025-05-23 00:43:10.886161 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [3] ---- cephclient 2025-05-23 00:43:10.886169 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-23 00:43:10.886174 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [4] ----- wait-for-keystone 2025-05-23 00:43:10.886305 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-23 00:43:10.886334 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ glance 2025-05-23 00:43:10.886339 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ cinder 2025-05-23 00:43:10.886406 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ nova 2025-05-23 00:43:10.886575 | orchestrator | 2025-05-23 00:43:10 | INFO  | A [4] ----- prometheus 2025-05-23 00:43:10.886583 | orchestrator | 2025-05-23 00:43:10 | INFO  | D [5] ------ grafana 2025-05-23 00:43:11.009199 | orchestrator | 2025-05-23 00:43:11 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-23 00:43:11.009294 | orchestrator | 2025-05-23 00:43:11 | INFO  | Tasks are running in the background 2025-05-23 00:43:12.841803 | orchestrator | 2025-05-23 00:43:12 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-23 00:43:14.926251 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:14.926463 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:14.928632 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task 
9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:14.928870 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:14.929434 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state STARTED 2025-05-23 00:43:14.929807 | orchestrator | 2025-05-23 00:43:14 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:14.929829 | orchestrator | 2025-05-23 00:43:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:17.974596 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:17.975671 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:17.978357 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:17.979749 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:17.981879 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state STARTED 2025-05-23 00:43:17.981903 | orchestrator | 2025-05-23 00:43:17 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:17.981915 | orchestrator | 2025-05-23 00:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:21.019190 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:21.022408 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:21.022622 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:21.025752 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:21.026539 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state STARTED 2025-05-23 00:43:21.028336 | orchestrator | 2025-05-23 00:43:21 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:21.028360 | orchestrator | 2025-05-23 00:43:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:24.103094 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:24.104138 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:24.104935 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:24.106918 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:24.108127 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state STARTED 2025-05-23 00:43:24.108772 | orchestrator | 2025-05-23 00:43:24 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:24.108804 | orchestrator | 2025-05-23 00:43:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:27.168273 | orchestrator | 2025-05-23 00:43:27 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:27.169902 | orchestrator | 2025-05-23 
00:43:27 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:27.169982 | orchestrator | 2025-05-23 00:43:27 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:27.171160 | orchestrator | 2025-05-23 00:43:27 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:27.173547 | orchestrator | 2025-05-23 00:43:27 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state STARTED 2025-05-23 00:43:27.174711 | orchestrator | 2025-05-23 00:43:27 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:27.174791 | orchestrator | 2025-05-23 00:43:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:30.242446 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:30.243284 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:30.245997 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:30.252140 | orchestrator | 2025-05-23 00:43:30.252181 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-23 00:43:30.252194 | orchestrator | 2025-05-23 00:43:30.252205 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-23 00:43:30.252217 | orchestrator | Friday 23 May 2025 00:43:17 +0000 (0:00:00.521) 0:00:00.521 ************ 2025-05-23 00:43:30.252229 | orchestrator | changed: [testbed-manager] 2025-05-23 00:43:30.252241 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:43:30.252252 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:43:30.252263 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:43:30.252273 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:43:30.252284 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:43:30.252295 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:43:30.252305 | orchestrator | 2025-05-23 00:43:30.252316 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-23 00:43:30.252327 | orchestrator | Friday 23 May 2025 00:43:21 +0000 (0:00:03.621) 0:00:04.143 ************ 2025-05-23 00:43:30.252338 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-23 00:43:30.252349 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-23 00:43:30.252360 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-23 00:43:30.252371 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-23 00:43:30.252382 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-23 00:43:30.252392 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-23 00:43:30.252403 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-23 00:43:30.252431 | orchestrator | 2025-05-23 00:43:30.252443 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-23 00:43:30.252453 | orchestrator | Friday 23 May 2025 00:43:23 +0000 (0:00:02.134) 0:00:06.278 ************ 2025-05-23 00:43:30.252474 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:21.382451', 'end': '2025-05-23 00:43:21.388466', 'delta': '0:00:00.006015', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252490 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:22.100446', 'end': '2025-05-23 00:43:22.108983', 'delta': '0:00:00.008537', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252507 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:21.882026', 'end': '2025-05-23 00:43:21.888631', 'delta': '0:00:00.006605', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252539 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:21.628953', 'end': '2025-05-23 00:43:21.637352', 'delta': '0:00:00.008399', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
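The item payloads above come from a preceding "ls -F ~/.tmux.conf" check (rc=2, failed_when_result: False), which the role uses to decide whether an existing regular file must be removed before the symlink is created in the following task. A minimal sketch of this check/remove/link pattern; the paths, the clone location, and the exact removal condition are assumptions, not the role's verbatim tasks:

# Hedged sketch of the check/remove/link pattern visible in this play.
- name: Check whether a dotfile already exists
  ansible.builtin.command: ls -F ~/.tmux.conf
  register: existing_dotfile_info
  failed_when: false
  changed_when: false

- name: Remove the existing file if a replacement is about to be linked
  ansible.builtin.file:
    path: ~/.tmux.conf
    state: absent
  # assumption: only remove when the path exists and is not already a symlink
  when:
    - existing_dotfile_info.rc == 0
    - not existing_dotfile_info.stdout.endswith('@')

- name: Link the dotfile from the cloned repository into the home folder
  ansible.builtin.file:
    src: ~/dotfiles/.tmux.conf   # hypothetical clone location
    dest: ~/.tmux.conf
    state: link
    force: true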
2025-05-23 00:43:30.252552 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:22.617403', 'end': '2025-05-23 00:43:22.625330', 'delta': '0:00:00.007927', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252570 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:22.801498', 'end': '2025-05-23 00:43:22.810026', 'delta': '0:00:00.008528', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252581 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-23 00:43:22.893566', 'end': '2025-05-23 00:43:22.902060', 'delta': '0:00:00.008494', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-23 00:43:30.252593 | orchestrator | 2025-05-23 00:43:30.252607 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-05-23 00:43:30.252619 | orchestrator | Friday 23 May 2025 00:43:25 +0000 (0:00:02.075) 0:00:08.353 ************ 2025-05-23 00:43:30.252630 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-23 00:43:30.252641 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-23 00:43:30.252651 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-23 00:43:30.252662 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-23 00:43:30.252673 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-23 00:43:30.252684 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-23 00:43:30.252695 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-23 00:43:30.252706 | orchestrator | 2025-05-23 00:43:30.252741 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:43:30.252754 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252769 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252782 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252802 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252817 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252830 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252849 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:43:30.252862 | orchestrator | 2025-05-23 00:43:30.252875 | orchestrator | Friday 23 May 2025 00:43:27 +0000 (0:00:01.869) 0:00:10.222 ************ 2025-05-23 00:43:30.252888 | orchestrator | =============================================================================== 2025-05-23 00:43:30.252901 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.62s 2025-05-23 00:43:30.252914 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.13s 2025-05-23 00:43:30.252926 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.08s 2025-05-23 00:43:30.252939 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 1.87s 2025-05-23 00:43:30.252976 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:30.252990 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task 6d9badb4-f69f-4dbf-8aa4-18ab6206723e is in state SUCCESS 2025-05-23 00:43:30.253400 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:30.256485 | orchestrator | 2025-05-23 00:43:30 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:30.257825 | orchestrator | 2025-05-23 00:43:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:33.290260 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:33.290704 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:33.300388 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:33.300424 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:33.300436 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:33.300447 | orchestrator | 2025-05-23 00:43:33 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:33.300459 | orchestrator | 2025-05-23 00:43:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:36.351260 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:36.351655 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:36.352242 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:36.354318 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:36.354717 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:36.356287 | orchestrator | 2025-05-23 00:43:36 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:36.356326 | orchestrator | 2025-05-23 00:43:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:39.409256 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:39.409351 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:39.409391 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:39.409426 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:39.413758 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:39.413806 | orchestrator | 2025-05-23 00:43:39 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:39.413818 | orchestrator | 2025-05-23 00:43:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:42.474231 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task 
dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:42.474341 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:42.474356 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:42.474369 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:42.485539 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:42.499138 | orchestrator | 2025-05-23 00:43:42 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:42.499210 | orchestrator | 2025-05-23 00:43:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:45.519947 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:45.520110 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:45.520975 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:45.522187 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:45.523346 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:45.524941 | orchestrator | 2025-05-23 00:43:45 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:45.524963 | orchestrator | 2025-05-23 00:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:48.574194 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:48.576368 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state STARTED 2025-05-23 00:43:48.577922 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:48.581108 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:48.586545 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:48.589634 | orchestrator | 2025-05-23 00:43:48 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:48.589676 | orchestrator | 2025-05-23 00:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:51.654658 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:51.655869 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task cb845b00-9970-4c6d-9168-6511a0efdabd is in state SUCCESS 2025-05-23 00:43:51.659203 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:51.660578 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:51.663801 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:51.665323 | orchestrator | 2025-05-23 00:43:51 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:51.665353 | 
orchestrator | 2025-05-23 00:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:54.726595 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:43:54.728799 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:54.732871 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:54.733248 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:54.735527 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:54.738103 | orchestrator | 2025-05-23 00:43:54 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:54.738156 | orchestrator | 2025-05-23 00:43:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:43:57.821385 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:43:57.821479 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:43:57.821495 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:43:57.821507 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:43:57.821518 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:43:57.821529 | orchestrator | 2025-05-23 00:43:57 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:43:57.821540 | orchestrator | 2025-05-23 00:43:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:00.865250 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:00.865957 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:00.866924 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:00.866974 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:00.868838 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:44:00.876519 | orchestrator | 2025-05-23 00:44:00 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:00.876562 | orchestrator | 2025-05-23 00:44:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:03.925667 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:03.927729 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:03.928016 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:03.928777 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:03.930151 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 
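The long runs of "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" records come from the deploy driver polling the manager's task queue once per second until every task it launched reports SUCCESS. For illustration only, the same wait-until-SUCCESS loop can be expressed as an Ansible until/retries task; the print-task-state helper below is hypothetical, since the real polling is built into the deployment tooling itself.

# Illustrative only; the helper command is hypothetical and not part of the real tooling.
- name: Wait until a manager task reaches SUCCESS
  ansible.builtin.command: /usr/local/bin/print-task-state dd48519b-f248-481d-8941-c3c8b5e97568
  register: task_state
  changed_when: false
  until: task_state.stdout == "SUCCESS"
  retries: 600            # upper bound of roughly ten minutes
  delay: 1                # matches "Wait 1 second(s) until the next check"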
2025-05-23 00:44:03.935591 | orchestrator | 2025-05-23 00:44:03 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:03.935643 | orchestrator | 2025-05-23 00:44:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:06.979434 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:06.979523 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:06.980061 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:06.981632 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:06.984989 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state STARTED 2025-05-23 00:44:06.988401 | orchestrator | 2025-05-23 00:44:06 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:06.988452 | orchestrator | 2025-05-23 00:44:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:10.019693 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:10.019984 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:10.020063 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:10.020934 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:10.021250 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task 5ebf1b3b-37d1-461e-928c-7e31817beccc is in state SUCCESS 2025-05-23 00:44:10.021664 | orchestrator | 2025-05-23 00:44:10 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:10.021828 | orchestrator | 2025-05-23 00:44:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:13.054270 | orchestrator | 2025-05-23 00:44:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:13.054360 | orchestrator | 2025-05-23 00:44:13 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:13.060970 | orchestrator | 2025-05-23 00:44:13 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:13.061330 | orchestrator | 2025-05-23 00:44:13 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:13.061982 | orchestrator | 2025-05-23 00:44:13 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:13.062090 | orchestrator | 2025-05-23 00:44:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:16.131214 | orchestrator | 2025-05-23 00:44:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:16.132495 | orchestrator | 2025-05-23 00:44:16 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:16.132907 | orchestrator | 2025-05-23 00:44:16 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:16.133637 | orchestrator | 2025-05-23 00:44:16 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:16.134437 | orchestrator | 2025-05-23 00:44:16 | INFO  | Task 
063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:16.134464 | orchestrator | 2025-05-23 00:44:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:19.186214 | orchestrator | 2025-05-23 00:44:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:19.186299 | orchestrator | 2025-05-23 00:44:19 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:19.186323 | orchestrator | 2025-05-23 00:44:19 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:19.187198 | orchestrator | 2025-05-23 00:44:19 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:19.187929 | orchestrator | 2025-05-23 00:44:19 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:19.188176 | orchestrator | 2025-05-23 00:44:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:22.239383 | orchestrator | 2025-05-23 00:44:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:22.239479 | orchestrator | 2025-05-23 00:44:22 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state STARTED 2025-05-23 00:44:22.239495 | orchestrator | 2025-05-23 00:44:22 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:22.239507 | orchestrator | 2025-05-23 00:44:22 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:22.240831 | orchestrator | 2025-05-23 00:44:22 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:22.240860 | orchestrator | 2025-05-23 00:44:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:25.295085 | orchestrator | 2025-05-23 00:44:25.295192 | orchestrator | 2025-05-23 00:44:25.295208 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-23 00:44:25.295220 | orchestrator | 2025-05-23 00:44:25.295231 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-23 00:44:25.295243 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:00.287) 0:00:00.287 ************ 2025-05-23 00:44:25.295255 | orchestrator | ok: [testbed-manager] => { 2025-05-23 00:44:25.295267 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-23 00:44:25.295280 | orchestrator | } 2025-05-23 00:44:25.295291 | orchestrator | 2025-05-23 00:44:25.295302 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-23 00:44:25.295313 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:00.217) 0:00:00.505 ************ 2025-05-23 00:44:25.295323 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.295335 | orchestrator | 2025-05-23 00:44:25.295352 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-23 00:44:25.295363 | orchestrator | Friday 23 May 2025 00:43:19 +0000 (0:00:00.956) 0:00:01.462 ************ 2025-05-23 00:44:25.295373 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-23 00:44:25.295384 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-23 00:44:25.295395 | orchestrator | 2025-05-23 00:44:25.295406 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-23 00:44:25.295417 | orchestrator | Friday 23 May 2025 00:43:20 +0000 (0:00:00.747) 0:00:02.210 ************ 2025-05-23 00:44:25.295427 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.295438 | orchestrator | 2025-05-23 00:44:25.295448 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-23 00:44:25.295459 | orchestrator | Friday 23 May 2025 00:43:22 +0000 (0:00:02.615) 0:00:04.825 ************ 2025-05-23 00:44:25.295489 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.295501 | orchestrator | 2025-05-23 00:44:25.295511 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-23 00:44:25.295522 | orchestrator | Friday 23 May 2025 00:43:24 +0000 (0:00:01.295) 0:00:06.120 ************ 2025-05-23 00:44:25.295532 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-05-23 00:44:25.295543 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.295553 | orchestrator | 2025-05-23 00:44:25.295564 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-23 00:44:25.295574 | orchestrator | Friday 23 May 2025 00:43:48 +0000 (0:00:24.244) 0:00:30.364 ************ 2025-05-23 00:44:25.295585 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.295595 | orchestrator | 2025-05-23 00:44:25.295605 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:44:25.295616 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.295628 | orchestrator | 2025-05-23 00:44:25.295641 | orchestrator | Friday 23 May 2025 00:43:50 +0000 (0:00:02.404) 0:00:32.769 ************ 2025-05-23 00:44:25.295652 | orchestrator | =============================================================================== 2025-05-23 00:44:25.295665 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.24s 2025-05-23 00:44:25.295677 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.61s 2025-05-23 00:44:25.295689 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.41s 2025-05-23 00:44:25.295701 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.30s 2025-05-23 00:44:25.295713 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.96s 2025-05-23 00:44:25.295725 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.75s 2025-05-23 00:44:25.295737 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.22s 2025-05-23 00:44:25.295749 | orchestrator | 2025-05-23 00:44:25.295793 | orchestrator | 2025-05-23 00:44:25.295812 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-23 00:44:25.295830 | orchestrator | 2025-05-23 00:44:25.295846 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-23 00:44:25.295864 | orchestrator | Friday 23 May 2025 00:43:17 +0000 (0:00:00.289) 0:00:00.289 ************ 2025-05-23 00:44:25.295882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-23 00:44:25.295900 | orchestrator | 2025-05-23 00:44:25.295917 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-23 00:44:25.295935 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:00.440) 0:00:00.730 ************ 2025-05-23 00:44:25.295951 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-23 00:44:25.295968 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-23 00:44:25.295985 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-23 00:44:25.296000 | orchestrator | 2025-05-23 00:44:25.296016 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-23 00:44:25.296031 | orchestrator | Friday 23 May 2025 00:43:19 +0000 (0:00:01.264) 0:00:01.994 ************ 2025-05-23 00:44:25.296046 | orchestrator | changed: [testbed-manager] 
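The homer play recapped above, and the openstackclient and phpmyadmin plays around it, all follow the same shape: create the external traefik network, render a docker-compose.yml under /opt/<service>, then bring the service up with retries while images are still being pulled, which is why "Manage homer service" logs "FAILED - RETRYING ... (10 retries left)" before succeeding. A rough sketch of that shape follows; the module choices and paths are chosen for illustration and are not taken from the osism.services roles.

# Rough, illustrative sketch; not the actual osism.services.homer tasks.
- name: Create traefik external network
  community.docker.docker_network:
    name: traefik
    attachable: true

- name: Copy docker-compose.yml file
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /opt/homer/docker-compose.yml
    mode: "0644"

- name: Manage homer service
  ansible.builtin.command: docker compose up --detach
  args:
    chdir: /opt/homer
  register: compose_up
  until: compose_up.rc == 0
  retries: 10             # matches "FAILED - RETRYING ... (10 retries left)"
  delay: 10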
2025-05-23 00:44:25.296062 | orchestrator | 2025-05-23 00:44:25.296077 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-23 00:44:25.296093 | orchestrator | Friday 23 May 2025 00:43:20 +0000 (0:00:01.357) 0:00:03.352 ************ 2025-05-23 00:44:25.296108 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-23 00:44:25.296124 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.296152 | orchestrator | 2025-05-23 00:44:25.296190 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-23 00:44:25.296207 | orchestrator | Friday 23 May 2025 00:43:59 +0000 (0:00:38.539) 0:00:41.891 ************ 2025-05-23 00:44:25.296222 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.296238 | orchestrator | 2025-05-23 00:44:25.296254 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-23 00:44:25.296269 | orchestrator | Friday 23 May 2025 00:44:00 +0000 (0:00:01.644) 0:00:43.535 ************ 2025-05-23 00:44:25.296285 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.296301 | orchestrator | 2025-05-23 00:44:25.296318 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-23 00:44:25.296334 | orchestrator | Friday 23 May 2025 00:44:02 +0000 (0:00:01.248) 0:00:44.784 ************ 2025-05-23 00:44:25.296349 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.296365 | orchestrator | 2025-05-23 00:44:25.296381 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-23 00:44:25.296405 | orchestrator | Friday 23 May 2025 00:44:04 +0000 (0:00:02.491) 0:00:47.275 ************ 2025-05-23 00:44:25.296421 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.296437 | orchestrator | 2025-05-23 00:44:25.296452 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-23 00:44:25.296468 | orchestrator | Friday 23 May 2025 00:44:05 +0000 (0:00:01.097) 0:00:48.373 ************ 2025-05-23 00:44:25.296484 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.296499 | orchestrator | 2025-05-23 00:44:25.296515 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-23 00:44:25.296623 | orchestrator | Friday 23 May 2025 00:44:06 +0000 (0:00:00.902) 0:00:49.276 ************ 2025-05-23 00:44:25.296644 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.296661 | orchestrator | 2025-05-23 00:44:25.296681 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:44:25.296699 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.296718 | orchestrator | 2025-05-23 00:44:25.296735 | orchestrator | Friday 23 May 2025 00:44:06 +0000 (0:00:00.425) 0:00:49.701 ************ 2025-05-23 00:44:25.296776 | orchestrator | =============================================================================== 2025-05-23 00:44:25.296795 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.54s 2025-05-23 00:44:25.296811 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.49s 2025-05-23 00:44:25.296828 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 1.64s 2025-05-23 00:44:25.296844 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.36s 2025-05-23 00:44:25.296859 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.26s 2025-05-23 00:44:25.296874 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.25s 2025-05-23 00:44:25.296890 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.10s 2025-05-23 00:44:25.296905 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.90s 2025-05-23 00:44:25.296920 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s 2025-05-23 00:44:25.296936 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s 2025-05-23 00:44:25.296952 | orchestrator | 2025-05-23 00:44:25.296967 | orchestrator | 2025-05-23 00:44:25.296982 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:44:25.296998 | orchestrator | 2025-05-23 00:44:25.297013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:44:25.297029 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:00.344) 0:00:00.344 ************ 2025-05-23 00:44:25.297045 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-23 00:44:25.297074 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-23 00:44:25.297090 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-23 00:44:25.297105 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-23 00:44:25.297121 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-23 00:44:25.297136 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-23 00:44:25.297151 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-23 00:44:25.297167 | orchestrator | 2025-05-23 00:44:25.297182 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-23 00:44:25.297198 | orchestrator | 2025-05-23 00:44:25.297213 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-23 00:44:25.297229 | orchestrator | Friday 23 May 2025 00:43:19 +0000 (0:00:01.144) 0:00:01.488 ************ 2025-05-23 00:44:25.297263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:44:25.297283 | orchestrator | 2025-05-23 00:44:25.297299 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-23 00:44:25.297315 | orchestrator | Friday 23 May 2025 00:43:21 +0000 (0:00:02.414) 0:00:03.903 ************ 2025-05-23 00:44:25.297330 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.297346 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:44:25.297362 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:44:25.297446 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:44:25.297464 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:44:25.297479 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:44:25.297494 | 
orchestrator | ok: [testbed-node-5] 2025-05-23 00:44:25.297509 | orchestrator | 2025-05-23 00:44:25.297525 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-23 00:44:25.297560 | orchestrator | Friday 23 May 2025 00:43:24 +0000 (0:00:02.157) 0:00:06.060 ************ 2025-05-23 00:44:25.297577 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.297593 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:44:25.297608 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:44:25.297623 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:44:25.297638 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:44:25.297653 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:44:25.297668 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:44:25.297683 | orchestrator | 2025-05-23 00:44:25.297699 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-23 00:44:25.297714 | orchestrator | Friday 23 May 2025 00:43:27 +0000 (0:00:03.484) 0:00:09.545 ************ 2025-05-23 00:44:25.297729 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.297745 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:44:25.297846 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:44:25.297863 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:44:25.297878 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:44:25.297894 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:44:25.297920 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:44:25.297936 | orchestrator | 2025-05-23 00:44:25.297952 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-23 00:44:25.297968 | orchestrator | Friday 23 May 2025 00:43:29 +0000 (0:00:02.160) 0:00:11.705 ************ 2025-05-23 00:44:25.297984 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.297999 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:44:25.298014 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:44:25.298117 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:44:25.298134 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:44:25.298152 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:44:25.298168 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:44:25.298185 | orchestrator | 2025-05-23 00:44:25.298201 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-23 00:44:25.298238 | orchestrator | Friday 23 May 2025 00:43:39 +0000 (0:00:09.553) 0:00:21.259 ************ 2025-05-23 00:44:25.298253 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:44:25.298268 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:44:25.298283 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:44:25.298298 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:44:25.298313 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:44:25.298328 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:44:25.298343 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.298358 | orchestrator | 2025-05-23 00:44:25.298373 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-23 00:44:25.298389 | orchestrator | Friday 23 May 2025 00:43:57 +0000 (0:00:18.381) 0:00:39.640 ************ 2025-05-23 00:44:25.298406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:44:25.298423 | orchestrator | 2025-05-23 00:44:25.298438 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-23 00:44:25.298453 | orchestrator | Friday 23 May 2025 00:44:00 +0000 (0:00:02.422) 0:00:42.062 ************ 2025-05-23 00:44:25.298469 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-23 00:44:25.298484 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-23 00:44:25.298517 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-23 00:44:25.298532 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-23 00:44:25.298547 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-23 00:44:25.298563 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-23 00:44:25.298578 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-23 00:44:25.298594 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-23 00:44:25.298610 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-23 00:44:25.298625 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-23 00:44:25.298640 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-23 00:44:25.298655 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-23 00:44:25.298671 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-23 00:44:25.298686 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-23 00:44:25.298701 | orchestrator | 2025-05-23 00:44:25.298716 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-23 00:44:25.298732 | orchestrator | Friday 23 May 2025 00:44:07 +0000 (0:00:06.959) 0:00:49.022 ************ 2025-05-23 00:44:25.298747 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.298785 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:44:25.298799 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:44:25.298813 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:44:25.298826 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:44:25.298840 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:44:25.298854 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:44:25.298867 | orchestrator | 2025-05-23 00:44:25.298880 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-23 00:44:25.298894 | orchestrator | Friday 23 May 2025 00:44:08 +0000 (0:00:01.766) 0:00:50.789 ************ 2025-05-23 00:44:25.298907 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:44:25.298921 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.298935 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:44:25.298949 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:44:25.298962 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:44:25.298976 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:44:25.298989 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:44:25.299002 | orchestrator | 2025-05-23 00:44:25.299027 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-23 00:44:25.299041 | orchestrator | Friday 23 May 2025 00:44:10 +0000 (0:00:01.915) 0:00:52.705 ************ 2025-05-23 00:44:25.299054 | 
orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.299068 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:44:25.299082 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:44:25.299095 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:44:25.299124 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:44:25.299138 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:44:25.299152 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:44:25.299165 | orchestrator | 2025-05-23 00:44:25.299179 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-23 00:44:25.299193 | orchestrator | Friday 23 May 2025 00:44:12 +0000 (0:00:01.634) 0:00:54.339 ************ 2025-05-23 00:44:25.299206 | orchestrator | ok: [testbed-manager] 2025-05-23 00:44:25.299220 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:44:25.299234 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:44:25.299247 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:44:25.299261 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:44:25.299274 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:44:25.299288 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:44:25.299301 | orchestrator | 2025-05-23 00:44:25.299315 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-23 00:44:25.299329 | orchestrator | Friday 23 May 2025 00:44:15 +0000 (0:00:03.107) 0:00:57.447 ************ 2025-05-23 00:44:25.299352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-23 00:44:25.299368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:44:25.299383 | orchestrator | 2025-05-23 00:44:25.299398 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-23 00:44:25.299411 | orchestrator | Friday 23 May 2025 00:44:16 +0000 (0:00:01.094) 0:00:58.542 ************ 2025-05-23 00:44:25.299425 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.299439 | orchestrator | 2025-05-23 00:44:25.299452 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-23 00:44:25.299466 | orchestrator | Friday 23 May 2025 00:44:18 +0000 (0:00:02.343) 0:01:00.885 ************ 2025-05-23 00:44:25.299479 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:44:25.299493 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:44:25.299507 | orchestrator | changed: [testbed-manager] 2025-05-23 00:44:25.299521 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:44:25.299534 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:44:25.299548 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:44:25.299561 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:44:25.299575 | orchestrator | 2025-05-23 00:44:25.299589 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:44:25.299603 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299617 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299631 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299646 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299660 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299673 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299697 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:44:25.299710 | orchestrator | 2025-05-23 00:44:25.299724 | orchestrator | Friday 23 May 2025 00:44:22 +0000 (0:00:03.490) 0:01:04.377 ************ 2025-05-23 00:44:25.299738 | orchestrator | =============================================================================== 2025-05-23 00:44:25.299773 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.38s 2025-05-23 00:44:25.299792 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.55s 2025-05-23 00:44:25.299806 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.96s 2025-05-23 00:44:25.299820 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.49s 2025-05-23 00:44:25.299834 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.48s 2025-05-23 00:44:25.299849 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.11s 2025-05-23 00:44:25.299862 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.42s 2025-05-23 00:44:25.299877 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.41s 2025-05-23 00:44:25.299891 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.34s 2025-05-23 00:44:25.299905 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.16s 2025-05-23 00:44:25.299919 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.16s 2025-05-23 00:44:25.299935 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.92s 2025-05-23 00:44:25.299948 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.77s 2025-05-23 00:44:25.299962 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.63s 2025-05-23 00:44:25.299989 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2025-05-23 00:44:25.300020 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.09s 2025-05-23 00:44:25.300036 | orchestrator | 2025-05-23 00:44:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:25.300050 | orchestrator | 2025-05-23 00:44:25 | INFO  | Task dd48519b-f248-481d-8941-c3c8b5e97568 is in state SUCCESS 2025-05-23 00:44:25.300065 | orchestrator | 2025-05-23 00:44:25 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:25.300078 | orchestrator | 2025-05-23 00:44:25 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:25.300093 | orchestrator | 2025-05-23 00:44:25 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:25.300109 | 
orchestrator | 2025-05-23 00:44:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:28.352894 | orchestrator | 2025-05-23 00:44:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:28.354429 | orchestrator | 2025-05-23 00:44:28 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:28.361602 | orchestrator | 2025-05-23 00:44:28 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:28.361658 | orchestrator | 2025-05-23 00:44:28 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:28.361671 | orchestrator | 2025-05-23 00:44:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:31.406576 | orchestrator | 2025-05-23 00:44:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:31.408353 | orchestrator | 2025-05-23 00:44:31 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:31.412558 | orchestrator | 2025-05-23 00:44:31 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:31.412973 | orchestrator | 2025-05-23 00:44:31 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:31.413004 | orchestrator | 2025-05-23 00:44:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:34.455951 | orchestrator | 2025-05-23 00:44:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:34.456603 | orchestrator | 2025-05-23 00:44:34 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:34.457518 | orchestrator | 2025-05-23 00:44:34 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:34.458808 | orchestrator | 2025-05-23 00:44:34 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:34.458846 | orchestrator | 2025-05-23 00:44:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:37.518135 | orchestrator | 2025-05-23 00:44:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:37.518680 | orchestrator | 2025-05-23 00:44:37 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:37.519543 | orchestrator | 2025-05-23 00:44:37 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:37.519874 | orchestrator | 2025-05-23 00:44:37 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:37.519894 | orchestrator | 2025-05-23 00:44:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:40.566919 | orchestrator | 2025-05-23 00:44:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:40.570985 | orchestrator | 2025-05-23 00:44:40 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:40.575500 | orchestrator | 2025-05-23 00:44:40 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:40.577640 | orchestrator | 2025-05-23 00:44:40 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:40.577662 | orchestrator | 2025-05-23 00:44:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:43.621965 | orchestrator | 2025-05-23 00:44:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:43.622683 | orchestrator | 2025-05-23 
00:44:43 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:43.624080 | orchestrator | 2025-05-23 00:44:43 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:43.625331 | orchestrator | 2025-05-23 00:44:43 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state STARTED 2025-05-23 00:44:43.625445 | orchestrator | 2025-05-23 00:44:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:46.707177 | orchestrator | 2025-05-23 00:44:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:46.707338 | orchestrator | 2025-05-23 00:44:46 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:46.707828 | orchestrator | 2025-05-23 00:44:46 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:46.708645 | orchestrator | 2025-05-23 00:44:46 | INFO  | Task 063d3bed-e92f-4371-835c-564bf2705ded is in state SUCCESS 2025-05-23 00:44:46.708695 | orchestrator | 2025-05-23 00:44:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:49.756713 | orchestrator | 2025-05-23 00:44:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:49.757084 | orchestrator | 2025-05-23 00:44:49 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:49.758124 | orchestrator | 2025-05-23 00:44:49 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:49.758228 | orchestrator | 2025-05-23 00:44:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:52.799763 | orchestrator | 2025-05-23 00:44:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:52.801544 | orchestrator | 2025-05-23 00:44:52 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:52.803402 | orchestrator | 2025-05-23 00:44:52 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:52.803493 | orchestrator | 2025-05-23 00:44:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:55.850140 | orchestrator | 2025-05-23 00:44:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:55.850934 | orchestrator | 2025-05-23 00:44:55 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:55.852041 | orchestrator | 2025-05-23 00:44:55 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:55.852069 | orchestrator | 2025-05-23 00:44:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:44:58.894418 | orchestrator | 2025-05-23 00:44:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:44:58.896577 | orchestrator | 2025-05-23 00:44:58 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:44:58.896625 | orchestrator | 2025-05-23 00:44:58 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:44:58.896648 | orchestrator | 2025-05-23 00:44:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:01.954937 | orchestrator | 2025-05-23 00:45:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:01.957576 | orchestrator | 2025-05-23 00:45:01 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:01.957613 | orchestrator | 2025-05-23 00:45:01 | INFO  | Task 
8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:01.957626 | orchestrator | 2025-05-23 00:45:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:05.018099 | orchestrator | 2025-05-23 00:45:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:05.018380 | orchestrator | 2025-05-23 00:45:05 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:05.020768 | orchestrator | 2025-05-23 00:45:05 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:05.020822 | orchestrator | 2025-05-23 00:45:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:08.069000 | orchestrator | 2025-05-23 00:45:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:08.069354 | orchestrator | 2025-05-23 00:45:08 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:08.070111 | orchestrator | 2025-05-23 00:45:08 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:08.070583 | orchestrator | 2025-05-23 00:45:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:11.112922 | orchestrator | 2025-05-23 00:45:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:11.115174 | orchestrator | 2025-05-23 00:45:11 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:11.115905 | orchestrator | 2025-05-23 00:45:11 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:11.116090 | orchestrator | 2025-05-23 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:14.153718 | orchestrator | 2025-05-23 00:45:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:14.156890 | orchestrator | 2025-05-23 00:45:14 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:14.157059 | orchestrator | 2025-05-23 00:45:14 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:14.157099 | orchestrator | 2025-05-23 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:17.199533 | orchestrator | 2025-05-23 00:45:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:17.201649 | orchestrator | 2025-05-23 00:45:17 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:17.201701 | orchestrator | 2025-05-23 00:45:17 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:17.201715 | orchestrator | 2025-05-23 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:20.240276 | orchestrator | 2025-05-23 00:45:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:20.240971 | orchestrator | 2025-05-23 00:45:20 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:20.243810 | orchestrator | 2025-05-23 00:45:20 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:20.243860 | orchestrator | 2025-05-23 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:23.304529 | orchestrator | 2025-05-23 00:45:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:23.304640 | orchestrator | 2025-05-23 00:45:23 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state 
STARTED 2025-05-23 00:45:23.304656 | orchestrator | 2025-05-23 00:45:23 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:23.306705 | orchestrator | 2025-05-23 00:45:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:26.354775 | orchestrator | 2025-05-23 00:45:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:26.356092 | orchestrator | 2025-05-23 00:45:26 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:26.356123 | orchestrator | 2025-05-23 00:45:26 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:26.356135 | orchestrator | 2025-05-23 00:45:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:29.407146 | orchestrator | 2025-05-23 00:45:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:29.408769 | orchestrator | 2025-05-23 00:45:29 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:29.411297 | orchestrator | 2025-05-23 00:45:29 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state STARTED 2025-05-23 00:45:29.411322 | orchestrator | 2025-05-23 00:45:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:32.453007 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:32.453217 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:32.455911 | orchestrator | 2025-05-23 00:45:32.455936 | orchestrator | 2025-05-23 00:45:32.455948 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-23 00:45:32.455959 | orchestrator | 2025-05-23 00:45:32.455970 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-23 00:45:32.455981 | orchestrator | Friday 23 May 2025 00:43:32 +0000 (0:00:00.255) 0:00:00.255 ************ 2025-05-23 00:45:32.455992 | orchestrator | ok: [testbed-manager] 2025-05-23 00:45:32.456004 | orchestrator | 2025-05-23 00:45:32.456015 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-23 00:45:32.456026 | orchestrator | Friday 23 May 2025 00:43:33 +0000 (0:00:01.104) 0:00:01.359 ************ 2025-05-23 00:45:32.456037 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-23 00:45:32.456048 | orchestrator | 2025-05-23 00:45:32.456058 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-23 00:45:32.456069 | orchestrator | Friday 23 May 2025 00:43:33 +0000 (0:00:00.590) 0:00:01.949 ************ 2025-05-23 00:45:32.456079 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.456090 | orchestrator | 2025-05-23 00:45:32.456101 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-23 00:45:32.456111 | orchestrator | Friday 23 May 2025 00:43:35 +0000 (0:00:01.546) 0:00:03.496 ************ 2025-05-23 00:45:32.456122 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
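The web UIs deployed here (homer, phpmyadmin) are published through the shared traefik instance: the rendered docker-compose.yml attaches each container to the external traefik network created earlier in the play. A minimal, purely hypothetical compose file of that shape is shown below; the image, router rule, and hostname are placeholders, not the testbed's actual configuration.

# Hypothetical /opt/phpmyadmin/docker-compose.yml sketch; all values are placeholders.
services:
  phpmyadmin:
    image: phpmyadmin:latest
    restart: unless-stopped
    networks:
      - traefik
    labels:
      traefik.enable: "true"
      traefik.http.routers.phpmyadmin.rule: Host(`phpmyadmin.example.com`)

networks:
  traefik:
    external: true        # the network created by "Create traefik external network"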
2025-05-23 00:45:32.456132 | orchestrator | ok: [testbed-manager] 2025-05-23 00:45:32.456143 | orchestrator | 2025-05-23 00:45:32.456153 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-23 00:45:32.456164 | orchestrator | Friday 23 May 2025 00:44:36 +0000 (0:01:00.882) 0:01:04.378 ************ 2025-05-23 00:45:32.456174 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.456185 | orchestrator | 2025-05-23 00:45:32.456195 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:45:32.456206 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:45:32.456218 | orchestrator | 2025-05-23 00:45:32.456230 | orchestrator | Friday 23 May 2025 00:44:45 +0000 (0:00:08.783) 0:01:13.162 ************ 2025-05-23 00:45:32.456256 | orchestrator | =============================================================================== 2025-05-23 00:45:32.456267 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.88s 2025-05-23 00:45:32.456278 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.78s 2025-05-23 00:45:32.456288 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.55s 2025-05-23 00:45:32.456299 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.10s 2025-05-23 00:45:32.456309 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.59s 2025-05-23 00:45:32.456320 | orchestrator | 2025-05-23 00:45:32.456351 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 8d70d34a-53ce-491a-aa34-0bfb7001043c is in state SUCCESS 2025-05-23 00:45:32.457864 | orchestrator | 2025-05-23 00:45:32.457981 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-23 00:45:32.457998 | orchestrator | 2025-05-23 00:45:32.458010 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-23 00:45:32.458098 | orchestrator | Friday 23 May 2025 00:43:13 +0000 (0:00:00.278) 0:00:00.278 ************ 2025-05-23 00:45:32.458111 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:45:32.458124 | orchestrator | 2025-05-23 00:45:32.458135 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-23 00:45:32.458165 | orchestrator | Friday 23 May 2025 00:43:15 +0000 (0:00:01.287) 0:00:01.566 ************ 2025-05-23 00:45:32.458176 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458187 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458198 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458208 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458219 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458231 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458241 | orchestrator | changed: [testbed-node-2] => 
(item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458252 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458263 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458273 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458283 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458294 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-23 00:45:32.458305 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458315 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458325 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458336 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458347 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-23 00:45:32.458358 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458370 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458383 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458395 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-23 00:45:32.458407 | orchestrator | 2025-05-23 00:45:32.458419 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-23 00:45:32.458431 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:03.516) 0:00:05.082 ************ 2025-05-23 00:45:32.458444 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:45:32.458458 | orchestrator | 2025-05-23 00:45:32.458471 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-23 00:45:32.458482 | orchestrator | Friday 23 May 2025 00:43:20 +0000 (0:00:01.657) 0:00:06.740 ************ 2025-05-23 00:45:32.458508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458526 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.458636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458653 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458779 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.458885 | orchestrator | 2025-05-23 00:45:32.458897 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-23 00:45:32.458908 | orchestrator | Friday 23 May 2025 00:43:24 +0000 (0:00:04.505) 0:00:11.245 ************ 2025-05-23 00:45:32.458919 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.458931 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.458949 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.458961 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.458978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.458997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459020 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.459031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459111 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459124 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.459135 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.459146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459180 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.459191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459234 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.459258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459292 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.459303 | orchestrator | 2025-05-23 00:45:32.459314 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-23 00:45:32.459325 | orchestrator | Friday 23 May 2025 00:43:26 +0000 (0:00:01.560) 0:00:12.806 ************ 2025-05-23 00:45:32.459336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459376 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.459392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459432 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.459443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459483 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.459494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.459505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.459521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.460080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460103 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.460114 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.460125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.460146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460168 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.460184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-23 00:45:32.460203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.460227 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.460237 | orchestrator | 2025-05-23 00:45:32.460248 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-23 00:45:32.460259 | orchestrator | Friday 23 May 2025 00:43:29 +0000 (0:00:02.706) 0:00:15.513 ************ 2025-05-23 00:45:32.460270 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.460280 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.460291 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.460302 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.460312 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.460323 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.460333 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.460350 | orchestrator | 2025-05-23 00:45:32.460361 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-23 00:45:32.460372 | orchestrator | Friday 23 May 2025 00:43:29 +0000 (0:00:00.735) 0:00:16.248 ************ 2025-05-23 00:45:32.460382 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.460393 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.460403 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.460414 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.460424 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.460435 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.460445 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.460456 | orchestrator | 2025-05-23 00:45:32.460473 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-23 00:45:32.460492 | orchestrator | Friday 23 May 2025 00:43:30 +0000 (0:00:00.944) 0:00:17.192 ************ 2025-05-23 00:45:32.460511 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:45:32.460531 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.460550 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.460568 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.460587 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.460605 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.460625 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.460645 | orchestrator | 2025-05-23 00:45:32.460666 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-23 00:45:32.460688 | orchestrator | Friday 23 May 2025 00:44:06 +0000 (0:00:35.192) 0:00:52.385 ************ 2025-05-23 00:45:32.460707 | 
orchestrator | ok: [testbed-manager] 2025-05-23 00:45:32.460727 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:45:32.460747 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:45:32.460764 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:45:32.460869 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:45:32.460894 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:45:32.460912 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:45:32.460923 | orchestrator | 2025-05-23 00:45:32.460934 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-23 00:45:32.460946 | orchestrator | Friday 23 May 2025 00:44:08 +0000 (0:00:02.755) 0:00:55.140 ************ 2025-05-23 00:45:32.460956 | orchestrator | ok: [testbed-manager] 2025-05-23 00:45:32.460967 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:45:32.460977 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:45:32.460988 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:45:32.460999 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:45:32.461010 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:45:32.461019 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:45:32.461028 | orchestrator | 2025-05-23 00:45:32.461038 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-23 00:45:32.461047 | orchestrator | Friday 23 May 2025 00:44:10 +0000 (0:00:01.304) 0:00:56.445 ************ 2025-05-23 00:45:32.461057 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.461066 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.461075 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.461085 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.461094 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.461103 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.461113 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.461122 | orchestrator | 2025-05-23 00:45:32.461131 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-23 00:45:32.461141 | orchestrator | Friday 23 May 2025 00:44:11 +0000 (0:00:00.933) 0:00:57.378 ************ 2025-05-23 00:45:32.461150 | orchestrator | skipping: [testbed-manager] 2025-05-23 00:45:32.461160 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:45:32.461169 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:45:32.461178 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:45:32.461188 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:45:32.461206 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:45:32.461221 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:45:32.461231 | orchestrator | 2025-05-23 00:45:32.461241 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-23 00:45:32.461250 | orchestrator | Friday 23 May 2025 00:44:11 +0000 (0:00:00.909) 0:00:58.287 ************ 2025-05-23 00:45:32.461269 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461290 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461348 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.461443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.461519 | orchestrator | 2025-05-23 00:45:32.461529 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-23 00:45:32.461545 | orchestrator | Friday 23 May 2025 00:44:17 +0000 (0:00:05.874) 0:01:04.162 ************ 2025-05-23 00:45:32.461555 | orchestrator | [WARNING]: Skipped 2025-05-23 00:45:32.461565 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-23 00:45:32.461575 | orchestrator | to this access issue: 2025-05-23 00:45:32.461584 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-23 00:45:32.461594 | orchestrator | directory 2025-05-23 00:45:32.461603 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:45:32.461612 | orchestrator | 2025-05-23 00:45:32.461622 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-23 00:45:32.461631 | orchestrator | Friday 23 May 2025 00:44:18 +0000 (0:00:00.730) 0:01:04.892 ************ 2025-05-23 00:45:32.461641 | orchestrator | [WARNING]: Skipped 2025-05-23 00:45:32.461650 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-23 00:45:32.461659 | orchestrator | to this access issue: 2025-05-23 00:45:32.461669 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-23 00:45:32.461678 | orchestrator | directory 2025-05-23 00:45:32.461691 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:45:32.461701 | orchestrator | 2025-05-23 00:45:32.461711 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-23 00:45:32.461720 | orchestrator | Friday 23 May 2025 00:44:19 +0000 (0:00:01.047) 0:01:05.940 ************ 2025-05-23 00:45:32.461730 | orchestrator | [WARNING]: Skipped 2025-05-23 00:45:32.461739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-23 00:45:32.461748 | orchestrator | to this access issue: 2025-05-23 00:45:32.461758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-23 00:45:32.461767 | orchestrator | directory 2025-05-23 00:45:32.461777 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:45:32.461806 | orchestrator | 2025-05-23 00:45:32.461816 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-23 00:45:32.461831 | orchestrator | Friday 23 May 2025 00:44:19 +0000 (0:00:00.427) 0:01:06.367 ************ 2025-05-23 00:45:32.461841 | orchestrator | [WARNING]: Skipped 2025-05-23 00:45:32.461850 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-23 00:45:32.461860 | orchestrator | to this access issue: 2025-05-23 00:45:32.461869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-23 00:45:32.461879 | orchestrator | directory 2025-05-23 00:45:32.461888 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 00:45:32.461898 | orchestrator | 2025-05-23 00:45:32.461907 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-23 00:45:32.461917 | orchestrator | Friday 23 May 2025 00:44:20 +0000 (0:00:00.529) 0:01:06.897 ************ 2025-05-23 00:45:32.461926 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.461935 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.461945 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.461954 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.461964 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.461973 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.461982 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.461992 | orchestrator | 2025-05-23 00:45:32.462001 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-23 00:45:32.462011 | orchestrator | Friday 23 May 2025 00:44:25 +0000 (0:00:04.660) 0:01:11.558 ************ 2025-05-23 00:45:32.462065 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462076 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462112 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462121 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462130 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-23 00:45:32.462140 | orchestrator | 2025-05-23 00:45:32.462149 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-23 00:45:32.462159 | orchestrator | Friday 23 May 2025 00:44:28 +0000 (0:00:03.259) 0:01:14.817 ************ 2025-05-23 00:45:32.462169 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.462178 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.462188 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.462197 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.462207 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.462216 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.462226 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.462235 | orchestrator | 2025-05-23 00:45:32.462245 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-23 00:45:32.462254 | orchestrator | Friday 23 May 2025 00:44:30 +0000 (0:00:02.066) 0:01:16.883 ************ 2025-05-23 00:45:32.462264 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462275 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462319 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462391 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462445 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462465 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:45:32.462489 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462504 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462520 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462530 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462540 | orchestrator | 2025-05-23 00:45:32.462550 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-23 00:45:32.462559 | orchestrator | Friday 23 May 2025 00:44:32 +0000 (0:00:02.053) 0:01:18.937 ************ 2025-05-23 00:45:32.462569 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462579 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462607 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462617 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462626 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-23 00:45:32.462635 | orchestrator | 2025-05-23 00:45:32.462645 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-23 00:45:32.462655 | orchestrator | Friday 23 May 2025 00:44:35 +0000 (0:00:02.582) 0:01:21.519 ************ 2025-05-23 00:45:32.462664 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462674 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462683 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462693 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462702 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462712 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 00:45:32.462721 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-23 
00:45:32.462730 | orchestrator | 2025-05-23 00:45:32.462740 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-23 00:45:32.462749 | orchestrator | Friday 23 May 2025 00:44:38 +0000 (0:00:03.600) 0:01:25.120 ************ 2025-05-23 00:45:32.462766 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462936 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.462964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.462996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.463006 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.463072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-23 00:45:32.463092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:45:32.463195 | orchestrator | 2025-05-23 00:45:32.463205 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-23 00:45:32.463215 | orchestrator | Friday 23 May 2025 00:44:42 +0000 (0:00:03.774) 0:01:28.894 ************ 2025-05-23 00:45:32.463225 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.463235 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.463245 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.463254 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.463264 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.463273 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.463283 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.463292 | orchestrator | 2025-05-23 00:45:32.463302 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-23 00:45:32.463312 | orchestrator | Friday 23 May 2025 00:44:44 +0000 (0:00:01.909) 0:01:30.803 ************ 2025-05-23 00:45:32.463321 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.463336 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.463344 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.463352 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.463359 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.463367 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.463375 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.463382 | orchestrator | 2025-05-23 00:45:32.463390 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463398 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:01.799) 0:01:32.603 ************ 2025-05-23 00:45:32.463406 | orchestrator | 2025-05-23 00:45:32.463414 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-05-23 00:45:32.463422 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.069) 0:01:32.672 ************ 2025-05-23 00:45:32.463429 | orchestrator | 2025-05-23 00:45:32.463437 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463445 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.076) 0:01:32.749 ************ 2025-05-23 00:45:32.463453 | orchestrator | 2025-05-23 00:45:32.463460 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463472 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.065) 0:01:32.814 ************ 2025-05-23 00:45:32.463481 | orchestrator | 2025-05-23 00:45:32.463489 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463499 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.352) 0:01:33.166 ************ 2025-05-23 00:45:32.463507 | orchestrator | 2025-05-23 00:45:32.463515 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463523 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.055) 0:01:33.222 ************ 2025-05-23 00:45:32.463530 | orchestrator | 2025-05-23 00:45:32.463538 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-23 00:45:32.463546 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.054) 0:01:33.276 ************ 2025-05-23 00:45:32.463554 | orchestrator | 2025-05-23 00:45:32.463561 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-23 00:45:32.463569 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.075) 0:01:33.352 ************ 2025-05-23 00:45:32.463577 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.463589 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.463598 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.463605 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.463613 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.463621 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.463629 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.463636 | orchestrator | 2025-05-23 00:45:32.463644 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-23 00:45:32.463652 | orchestrator | Friday 23 May 2025 00:44:55 +0000 (0:00:08.901) 0:01:42.254 ************ 2025-05-23 00:45:32.463660 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.463668 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.463675 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.463683 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.463691 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.463698 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.463706 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.463714 | orchestrator | 2025-05-23 00:45:32.463722 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-23 00:45:32.463730 | orchestrator | Friday 23 May 2025 00:45:23 +0000 (0:00:27.252) 0:02:09.506 ************ 2025-05-23 00:45:32.463737 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:45:32.463745 | orchestrator | ok: 
[testbed-node-0] 2025-05-23 00:45:32.463753 | orchestrator | ok: [testbed-manager] 2025-05-23 00:45:32.463761 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:45:32.463769 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:45:32.463804 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:45:32.463814 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:45:32.463821 | orchestrator | 2025-05-23 00:45:32.463829 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-23 00:45:32.463837 | orchestrator | Friday 23 May 2025 00:45:25 +0000 (0:00:02.503) 0:02:12.010 ************ 2025-05-23 00:45:32.463845 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:45:32.463853 | orchestrator | changed: [testbed-manager] 2025-05-23 00:45:32.463861 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:45:32.463869 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:45:32.463877 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:45:32.463884 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:45:32.463892 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:45:32.463900 | orchestrator | 2025-05-23 00:45:32.463908 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:45:32.463917 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463925 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463933 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463941 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463949 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463957 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463965 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 00:45:32.463973 | orchestrator | 2025-05-23 00:45:32.463981 | orchestrator | 2025-05-23 00:45:32.463989 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:45:32.463997 | orchestrator | Friday 23 May 2025 00:45:30 +0000 (0:00:04.542) 0:02:16.553 ************ 2025-05-23 00:45:32.464005 | orchestrator | =============================================================================== 2025-05-23 00:45:32.464012 | orchestrator | common : Ensure fluentd image is present for label check --------------- 35.19s 2025-05-23 00:45:32.464020 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.25s 2025-05-23 00:45:32.464028 | orchestrator | common : Restart fluentd container -------------------------------------- 8.90s 2025-05-23 00:45:32.464036 | orchestrator | common : Copying over config.json files for services -------------------- 5.87s 2025-05-23 00:45:32.464043 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.66s 2025-05-23 00:45:32.464051 | orchestrator | common : Restart cron container ----------------------------------------- 4.54s 2025-05-23 00:45:32.464059 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.51s 
2025-05-23 00:45:32.464070 | orchestrator | common : Check common containers ---------------------------------------- 3.78s 2025-05-23 00:45:32.464078 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.60s 2025-05-23 00:45:32.464086 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.52s 2025-05-23 00:45:32.464094 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.26s 2025-05-23 00:45:32.464101 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.76s 2025-05-23 00:45:32.464109 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.71s 2025-05-23 00:45:32.464122 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.58s 2025-05-23 00:45:32.464134 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.50s 2025-05-23 00:45:32.464143 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.07s 2025-05-23 00:45:32.464150 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.05s 2025-05-23 00:45:32.464158 | orchestrator | common : Creating log volume -------------------------------------------- 1.91s 2025-05-23 00:45:32.464166 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.80s 2025-05-23 00:45:32.464174 | orchestrator | common : include_tasks -------------------------------------------------- 1.66s 2025-05-23 00:45:32.464182 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:32.464190 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:32.464198 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:32.464206 | orchestrator | 2025-05-23 00:45:32 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:32.464213 | orchestrator | 2025-05-23 00:45:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:35.504295 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:35.504518 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:35.505511 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:35.506512 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:35.507135 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:35.508116 | orchestrator | 2025-05-23 00:45:35 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:35.508770 | orchestrator | 2025-05-23 00:45:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:38.546072 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:38.547318 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:38.548577 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task 
7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:38.549412 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:38.550453 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:38.551466 | orchestrator | 2025-05-23 00:45:38 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:38.551490 | orchestrator | 2025-05-23 00:45:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:41.586919 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:41.587330 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:41.588199 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:41.589231 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:41.590186 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:41.591293 | orchestrator | 2025-05-23 00:45:41 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:41.591337 | orchestrator | 2025-05-23 00:45:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:44.620613 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:44.622474 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:44.623574 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:44.624623 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:44.629076 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:44.634601 | orchestrator | 2025-05-23 00:45:44 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:44.634646 | orchestrator | 2025-05-23 00:45:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:47.669174 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:47.669585 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:47.670871 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:47.671820 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:47.672836 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:47.674641 | orchestrator | 2025-05-23 00:45:47 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:47.674673 | orchestrator | 2025-05-23 00:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:50.715598 | orchestrator | 2025-05-23 00:45:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:50.715930 | orchestrator | 2025-05-23 
00:45:50 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:50.717348 | orchestrator | 2025-05-23 00:45:50 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:50.720072 | orchestrator | 2025-05-23 00:45:50 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:50.726323 | orchestrator | 2025-05-23 00:45:50 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:50.727324 | orchestrator | 2025-05-23 00:45:50 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state STARTED 2025-05-23 00:45:50.727367 | orchestrator | 2025-05-23 00:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:53.781970 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:53.782226 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:53.782983 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:53.784051 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:53.784652 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:53.785383 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:45:53.786246 | orchestrator | 2025-05-23 00:45:53 | INFO  | Task 1d3eef55-991f-44ac-94c6-780cfce24cd4 is in state SUCCESS 2025-05-23 00:45:53.786271 | orchestrator | 2025-05-23 00:45:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:56.829109 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:56.829868 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:56.829906 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:56.830946 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:56.831307 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:56.831329 | orchestrator | 2025-05-23 00:45:56 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:45:56.831358 | orchestrator | 2025-05-23 00:45:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:45:59.874765 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:45:59.876464 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:45:59.876497 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:45:59.876510 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:45:59.876522 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:45:59.877007 | orchestrator | 2025-05-23 00:45:59 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 
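The repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the manager polling the queued deployment tasks until each one reports SUCCESS, as happens above for task 1d3eef55-991f-44ac-94c6-780cfce24cd4. A minimal sketch of such a poll loop follows; it is illustrative only and not the actual osism client code, and the get_state callable, the stubbed task ID, and the one-second interval are assumptions taken from the log output.

import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

def wait_for_tasks(task_ids, get_state, interval=1):
    # get_state maps a task id to its current state string (for example a
    # Celery AsyncResult lookup); it is a placeholder, not a real osism API.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

# Example with a stubbed state source: the task finishes on the third check.
states = {"1d3eef55-991f-44ac-94c6-780cfce24cd4": iter(["STARTED", "STARTED", "SUCCESS"])}
wait_for_tasks(states, lambda task_id: next(states[task_id]))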
2025-05-23 00:45:59.877209 | orchestrator | 2025-05-23 00:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:02.907418 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:02.907844 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:02.908151 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state STARTED 2025-05-23 00:46:02.908614 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:02.910113 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:02.910458 | orchestrator | 2025-05-23 00:46:02 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:02.910488 | orchestrator | 2025-05-23 00:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:05.957967 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:05.959972 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:05.961286 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task 7d29bed6-9b13-4b88-b35d-7db993bdb5b3 is in state SUCCESS 2025-05-23 00:46:05.963108 | orchestrator | 2025-05-23 00:46:05.963155 | orchestrator | 2025-05-23 00:46:05.963168 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:46:05.963179 | orchestrator | 2025-05-23 00:46:05.963190 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:46:05.963202 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.460) 0:00:00.460 ************ 2025-05-23 00:46:05.963213 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:46:05.963225 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:46:05.963235 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:46:05.963246 | orchestrator | 2025-05-23 00:46:05.963256 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:46:05.963267 | orchestrator | Friday 23 May 2025 00:45:35 +0000 (0:00:00.541) 0:00:01.002 ************ 2025-05-23 00:46:05.963278 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-23 00:46:05.963289 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-23 00:46:05.963299 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-23 00:46:05.963310 | orchestrator | 2025-05-23 00:46:05.963320 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-23 00:46:05.963331 | orchestrator | 2025-05-23 00:46:05.963341 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-23 00:46:05.963352 | orchestrator | Friday 23 May 2025 00:45:35 +0000 (0:00:00.526) 0:00:01.528 ************ 2025-05-23 00:46:05.963363 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:46:05.963373 | orchestrator | 2025-05-23 00:46:05.963384 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-23 00:46:05.963394 | orchestrator | Friday 23 May 2025 
00:45:36 +0000 (0:00:00.829) 0:00:02.357 ************ 2025-05-23 00:46:05.963405 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-23 00:46:05.963416 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-23 00:46:05.963426 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-23 00:46:05.963437 | orchestrator | 2025-05-23 00:46:05.963447 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-23 00:46:05.963458 | orchestrator | Friday 23 May 2025 00:45:37 +0000 (0:00:00.967) 0:00:03.325 ************ 2025-05-23 00:46:05.963468 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-23 00:46:05.963479 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-23 00:46:05.963489 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-23 00:46:05.963500 | orchestrator | 2025-05-23 00:46:05.963510 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-23 00:46:05.963521 | orchestrator | Friday 23 May 2025 00:45:39 +0000 (0:00:02.156) 0:00:05.482 ************ 2025-05-23 00:46:05.963531 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:05.963543 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:05.963553 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:05.963564 | orchestrator | 2025-05-23 00:46:05.963583 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-23 00:46:05.963594 | orchestrator | Friday 23 May 2025 00:45:42 +0000 (0:00:02.536) 0:00:08.018 ************ 2025-05-23 00:46:05.963605 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:05.963615 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:05.963626 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:05.963636 | orchestrator | 2025-05-23 00:46:05.963647 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:46:05.963658 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.963669 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.963694 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.963707 | orchestrator | 2025-05-23 00:46:05.963719 | orchestrator | 2025-05-23 00:46:05.963732 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:46:05.963744 | orchestrator | Friday 23 May 2025 00:45:50 +0000 (0:00:08.411) 0:00:16.430 ************ 2025-05-23 00:46:05.963756 | orchestrator | =============================================================================== 2025-05-23 00:46:05.963768 | orchestrator | memcached : Restart memcached container --------------------------------- 8.41s 2025-05-23 00:46:05.963805 | orchestrator | memcached : Check memcached container ----------------------------------- 2.54s 2025-05-23 00:46:05.963824 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.16s 2025-05-23 00:46:05.963836 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.97s 2025-05-23 00:46:05.963849 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.83s 2025-05-23 00:46:05.963861 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.54s 2025-05-23 00:46:05.963873 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-05-23 00:46:05.963885 | orchestrator | 2025-05-23 00:46:05.963898 | orchestrator | 2025-05-23 00:46:05.963909 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:46:05.963962 | orchestrator | 2025-05-23 00:46:05.963978 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:46:05.963990 | orchestrator | Friday 23 May 2025 00:45:33 +0000 (0:00:00.336) 0:00:00.336 ************ 2025-05-23 00:46:05.964003 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:46:05.964015 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:46:05.964027 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:46:05.964040 | orchestrator | 2025-05-23 00:46:05.964053 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:46:05.964078 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.616) 0:00:00.952 ************ 2025-05-23 00:46:05.964089 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-23 00:46:05.964101 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-23 00:46:05.964111 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-23 00:46:05.964122 | orchestrator | 2025-05-23 00:46:05.964133 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-23 00:46:05.964144 | orchestrator | 2025-05-23 00:46:05.964155 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-23 00:46:05.964165 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.362) 0:00:01.314 ************ 2025-05-23 00:46:05.964176 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:46:05.964187 | orchestrator | 2025-05-23 00:46:05.964198 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-23 00:46:05.964209 | orchestrator | Friday 23 May 2025 00:45:35 +0000 (0:00:00.795) 0:00:02.109 ************ 2025-05-23 00:46:05.964223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2025-05-23 00:46:05.964264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964322 | orchestrator | 2025-05-23 00:46:05.964334 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-23 00:46:05.964345 | orchestrator | Friday 23 May 2025 00:45:37 +0000 (0:00:01.642) 0:00:03.752 ************ 2025-05-23 00:46:05.964357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964452 | orchestrator | 2025-05-23 00:46:05.964463 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-23 00:46:05.964474 | orchestrator | Friday 23 May 2025 00:45:39 +0000 (0:00:02.697) 0:00:06.449 ************ 2025-05-23 00:46:05.964485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964571 | orchestrator | 2025-05-23 00:46:05.964582 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-23 00:46:05.964593 | orchestrator | Friday 23 May 2025 00:45:43 +0000 (0:00:03.411) 0:00:09.861 ************ 2025-05-23 00:46:05.964604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-23 00:46:05.964689 | orchestrator | 2025-05-23 00:46:05.964700 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-23 00:46:05.964711 | orchestrator | Friday 23 May 2025 00:45:45 +0000 (0:00:02.274) 0:00:12.135 ************ 2025-05-23 00:46:05.964722 | orchestrator | 2025-05-23 00:46:05.964733 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-23 00:46:05.964744 | orchestrator | Friday 23 May 2025 00:45:45 +0000 (0:00:00.055) 0:00:12.191 ************ 2025-05-23 00:46:05.964761 | orchestrator | 2025-05-23 00:46:05.964772 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-23 00:46:05.964807 | orchestrator | Friday 23 May 2025 00:45:45 +0000 (0:00:00.102) 0:00:12.294 ************ 2025-05-23 00:46:05.964820 | orchestrator | 2025-05-23 00:46:05.964830 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-23 00:46:05.964841 | orchestrator | Friday 23 May 2025 00:45:45 +0000 (0:00:00.106) 0:00:12.401 ************ 2025-05-23 00:46:05.964852 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:05.964863 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:05.964874 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:05.964884 | orchestrator | 2025-05-23 00:46:05.964895 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-23 00:46:05.964906 | orchestrator | Friday 23 May 2025 00:45:54 +0000 (0:00:09.016) 0:00:21.417 ************ 2025-05-23 00:46:05.964916 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:05.964927 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:05.964938 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:05.964949 | orchestrator | 2025-05-23 00:46:05.964959 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-23 00:46:05.964971 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.964982 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.964993 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:46:05.965003 | orchestrator | 2025-05-23 00:46:05.965014 | orchestrator | 2025-05-23 00:46:05.965025 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:46:05.965036 | orchestrator | Friday 23 May 2025 00:46:02 +0000 (0:00:08.234) 0:00:29.651 ************ 2025-05-23 00:46:05.965046 | orchestrator | =============================================================================== 2025-05-23 00:46:05.965057 | orchestrator | redis : Restart redis container ----------------------------------------- 9.02s 2025-05-23 00:46:05.965068 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.23s 2025-05-23 00:46:05.965079 | orchestrator | redis : Copying over redis config files --------------------------------- 3.41s 2025-05-23 00:46:05.965089 | orchestrator | redis : Copying over default config.json files -------------------------- 2.70s 2025-05-23 00:46:05.965100 | orchestrator | redis : Check redis containers ------------------------------------------ 2.27s 2025-05-23 00:46:05.965111 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.64s 2025-05-23 00:46:05.965121 | orchestrator | redis : include_tasks --------------------------------------------------- 0.79s 2025-05-23 00:46:05.965132 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2025-05-23 00:46:05.965143 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-05-23 00:46:05.965153 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.27s 2025-05-23 00:46:05.965243 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:05.965829 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:05.969106 | orchestrator | 2025-05-23 00:46:05 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:05.969480 | orchestrator | 2025-05-23 00:46:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:09.015542 | orchestrator | 2025-05-23 00:46:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:09.015616 | orchestrator | 2025-05-23 00:46:09 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:09.016196 | orchestrator | 2025-05-23 00:46:09 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:09.016965 | orchestrator | 2025-05-23 00:46:09 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:09.017753 | orchestrator | 2025-05-23 00:46:09 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:09.017780 | orchestrator | 2025-05-23 00:46:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:12.056164 | orchestrator | 2025-05-23 00:46:12 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:12.056257 | orchestrator | 2025-05-23 00:46:12 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:12.056951 | orchestrator | 2025-05-23 00:46:12 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:12.057813 | orchestrator | 2025-05-23 00:46:12 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:12.058554 | orchestrator | 2025-05-23 00:46:12 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:12.058619 | orchestrator | 2025-05-23 00:46:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:15.100111 | orchestrator | 2025-05-23 00:46:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:15.100488 | orchestrator | 2025-05-23 00:46:15 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:15.103033 | orchestrator | 2025-05-23 00:46:15 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:15.103759 | orchestrator | 2025-05-23 00:46:15 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:15.105752 | orchestrator | 2025-05-23 00:46:15 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:15.105775 | orchestrator | 2025-05-23 00:46:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:18.147119 | orchestrator | 2025-05-23 00:46:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:18.148116 | orchestrator | 2025-05-23 00:46:18 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:18.150352 | orchestrator | 2025-05-23 00:46:18 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:18.152699 | orchestrator | 2025-05-23 00:46:18 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:18.154192 | orchestrator | 2025-05-23 00:46:18 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:18.154232 | orchestrator | 2025-05-23 00:46:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:21.199338 | orchestrator | 2025-05-23 00:46:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:21.202454 | orchestrator | 2025-05-23 00:46:21 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:21.202999 | orchestrator | 2025-05-23 00:46:21 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:21.206704 | orchestrator | 2025-05-23 00:46:21 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:21.206726 | orchestrator | 2025-05-23 00:46:21 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:21.206734 | orchestrator | 2025-05-23 00:46:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:24.243453 | orchestrator | 2025-05-23 00:46:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:24.243641 | orchestrator | 2025-05-23 00:46:24 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:24.244489 | orchestrator | 2025-05-23 00:46:24 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:24.245375 | orchestrator | 2025-05-23 
00:46:24 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:24.246290 | orchestrator | 2025-05-23 00:46:24 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:24.246325 | orchestrator | 2025-05-23 00:46:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:27.279520 | orchestrator | 2025-05-23 00:46:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:27.281539 | orchestrator | 2025-05-23 00:46:27 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:27.282472 | orchestrator | 2025-05-23 00:46:27 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:27.284179 | orchestrator | 2025-05-23 00:46:27 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:27.288942 | orchestrator | 2025-05-23 00:46:27 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:27.290166 | orchestrator | 2025-05-23 00:46:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:30.333153 | orchestrator | 2025-05-23 00:46:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:30.333279 | orchestrator | 2025-05-23 00:46:30 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:30.336300 | orchestrator | 2025-05-23 00:46:30 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:30.337299 | orchestrator | 2025-05-23 00:46:30 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:30.338475 | orchestrator | 2025-05-23 00:46:30 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:30.338596 | orchestrator | 2025-05-23 00:46:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:33.379610 | orchestrator | 2025-05-23 00:46:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:33.381350 | orchestrator | 2025-05-23 00:46:33 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:33.384677 | orchestrator | 2025-05-23 00:46:33 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:33.386876 | orchestrator | 2025-05-23 00:46:33 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:33.389361 | orchestrator | 2025-05-23 00:46:33 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:33.389385 | orchestrator | 2025-05-23 00:46:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:36.439676 | orchestrator | 2025-05-23 00:46:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:36.441233 | orchestrator | 2025-05-23 00:46:36 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:36.442196 | orchestrator | 2025-05-23 00:46:36 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:36.443887 | orchestrator | 2025-05-23 00:46:36 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:36.445533 | orchestrator | 2025-05-23 00:46:36 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:36.445557 | orchestrator | 2025-05-23 00:46:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:39.492246 | orchestrator | 2025-05-23 
00:46:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:39.493667 | orchestrator | 2025-05-23 00:46:39 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:39.495382 | orchestrator | 2025-05-23 00:46:39 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:39.496934 | orchestrator | 2025-05-23 00:46:39 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:39.498340 | orchestrator | 2025-05-23 00:46:39 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:39.498366 | orchestrator | 2025-05-23 00:46:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:42.541762 | orchestrator | 2025-05-23 00:46:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:42.542999 | orchestrator | 2025-05-23 00:46:42 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:42.543943 | orchestrator | 2025-05-23 00:46:42 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:42.545325 | orchestrator | 2025-05-23 00:46:42 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state STARTED 2025-05-23 00:46:42.546299 | orchestrator | 2025-05-23 00:46:42 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:42.546327 | orchestrator | 2025-05-23 00:46:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:45.596536 | orchestrator | 2025-05-23 00:46:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:45.596651 | orchestrator | 2025-05-23 00:46:45 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:45.597347 | orchestrator | 2025-05-23 00:46:45 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:45.601379 | orchestrator | 2025-05-23 00:46:45.601422 | orchestrator | 2025-05-23 00:46:45.601434 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:46:45.601446 | orchestrator | 2025-05-23 00:46:45.601457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:46:45.601468 | orchestrator | Friday 23 May 2025 00:45:33 +0000 (0:00:00.380) 0:00:00.380 ************ 2025-05-23 00:46:45.601479 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:46:45.601491 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:46:45.601502 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:46:45.601512 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:46:45.601523 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:46:45.601533 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:46:45.601544 | orchestrator | 2025-05-23 00:46:45.601555 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:46:45.601565 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.573) 0:00:00.954 ************ 2025-05-23 00:46:45.601576 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601587 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601602 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601614 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601625 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601657 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-23 00:46:45.601668 | orchestrator | 2025-05-23 00:46:45.601678 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-23 00:46:45.601689 | orchestrator | 2025-05-23 00:46:45.601699 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-23 00:46:45.601710 | orchestrator | Friday 23 May 2025 00:45:35 +0000 (0:00:00.738) 0:00:01.692 ************ 2025-05-23 00:46:45.601722 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:46:45.601734 | orchestrator | 2025-05-23 00:46:45.601744 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-23 00:46:45.601755 | orchestrator | Friday 23 May 2025 00:45:36 +0000 (0:00:01.575) 0:00:03.268 ************ 2025-05-23 00:46:45.601766 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-23 00:46:45.601820 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-23 00:46:45.601833 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-23 00:46:45.601844 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-23 00:46:45.601854 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-23 00:46:45.601865 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-23 00:46:45.601876 | orchestrator | 2025-05-23 00:46:45.601886 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-23 00:46:45.601897 | orchestrator | Friday 23 May 2025 00:45:38 +0000 (0:00:01.671) 0:00:04.940 ************ 2025-05-23 00:46:45.601907 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-23 00:46:45.601918 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-23 00:46:45.601929 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-23 00:46:45.601947 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-23 00:46:45.601960 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-23 00:46:45.601972 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-23 00:46:45.601985 | orchestrator | 2025-05-23 00:46:45.601997 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-23 00:46:45.602008 | orchestrator | Friday 23 May 2025 00:45:40 +0000 (0:00:02.061) 0:00:07.002 ************ 2025-05-23 00:46:45.602089 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-23 00:46:45.602102 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:46:45.602116 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-23 00:46:45.602127 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:46:45.602139 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-23 00:46:45.602151 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:46:45.602163 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-23 00:46:45.602175 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 00:46:45.602187 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-23 00:46:45.602199 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:46:45.602211 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-23 00:46:45.602223 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:46:45.602236 | orchestrator | 2025-05-23 00:46:45.602248 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-23 00:46:45.602260 | orchestrator | Friday 23 May 2025 00:45:41 +0000 (0:00:01.163) 0:00:08.166 ************ 2025-05-23 00:46:45.602272 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:46:45.602285 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:46:45.602297 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:46:45.602308 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:46:45.602319 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:46:45.602329 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:46:45.602349 | orchestrator | 2025-05-23 00:46:45.602361 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-23 00:46:45.602372 | orchestrator | Friday 23 May 2025 00:45:42 +0000 (0:00:00.817) 0:00:08.984 ************ 2025-05-23 00:46:45.602403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602587 | orchestrator | 2025-05-23 00:46:45.602598 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-23 00:46:45.602609 | orchestrator | Friday 23 May 2025 00:45:44 +0000 (0:00:02.029) 0:00:11.013 ************ 
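[editor's note, not part of the job output] The loop items echoed by these kolla-ansible tasks ("changed: [host] => (item={'key': ..., 'value': ...})") are entries of a per-role service dictionary. A minimal Python sketch of that shape, reconstructed only from the values the log itself prints for the openvswitch-db-server entry (the final loop is a simplified illustration of the iteration, not the actual Ansible role code):

# Per-service definition as echoed in the log above; values copied verbatim
# from the openvswitch-db-server item (volumes list abbreviated to the ones shown).
openvswitch_services = {
    "openvswitch-db-server": {
        "container_name": "openvswitch_db",
        "image": "registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206",
        "enabled": True,
        "group": "openvswitch",
        "host_in_groups": True,
        "volumes": [
            "/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "kolla_logs:/var/log/kolla/",
            "openvswitch_db:/var/lib/openvswitch/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "ovsdb-client list-dbs"],
            "timeout": "30",
        },
    },
}

# Each "=> (item={'key': ..., 'value': ...})" line corresponds to one iteration
# over these entries, roughly equivalent to:
for key, value in openvswitch_services.items():
    print(f"processing {key} -> container {value['container_name']}")
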
2025-05-23 00:46:45.602621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.602812 | orchestrator | 2025-05-23 00:46:45.602823 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-23 00:46:45.602834 | orchestrator | Friday 23 May 2025 00:45:46 +0000 (0:00:02.403) 0:00:13.417 ************ 2025-05-23 00:46:45.602846 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:45.602857 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:45.602867 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:45.602878 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:46:45.602889 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:46:45.602899 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:46:45.602909 | orchestrator | 2025-05-23 00:46:45.602920 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-23 00:46:45.602931 | orchestrator | Friday 23 May 2025 00:45:49 +0000 (0:00:02.276) 0:00:15.693 ************ 2025-05-23 00:46:45.602942 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:45.602952 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:45.602962 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:45.602973 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:46:45.602983 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:46:45.602994 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:46:45.603004 | orchestrator | 2025-05-23 00:46:45.603015 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-23 00:46:45.603025 | 
orchestrator | Friday 23 May 2025 00:45:52 +0000 (0:00:03.580) 0:00:19.274 ************ 2025-05-23 00:46:45.603036 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:46:45.603047 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:46:45.603057 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:46:45.603068 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:46:45.603078 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:46:45.603089 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:46:45.603099 | orchestrator | 2025-05-23 00:46:45.603110 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-23 00:46:45.603127 | orchestrator | Friday 23 May 2025 00:45:54 +0000 (0:00:01.921) 0:00:21.196 ************ 2025-05-23 00:46:45.603143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-23 00:46:45.603313 | orchestrator | 2025-05-23 00:46:45.603324 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603335 | orchestrator | Friday 23 May 2025 00:45:58 +0000 (0:00:03.957) 0:00:25.154 ************ 2025-05-23 00:46:45.603345 | orchestrator | 2025-05-23 00:46:45.603356 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603367 | orchestrator | Friday 23 May 2025 00:45:58 +0000 (0:00:00.219) 0:00:25.373 ************ 2025-05-23 00:46:45.603377 | orchestrator | 2025-05-23 00:46:45.603388 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603398 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.188) 
0:00:25.562 ************ 2025-05-23 00:46:45.603409 | orchestrator | 2025-05-23 00:46:45.603419 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603430 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.087) 0:00:25.649 ************ 2025-05-23 00:46:45.603440 | orchestrator | 2025-05-23 00:46:45.603451 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603462 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.163) 0:00:25.812 ************ 2025-05-23 00:46:45.603472 | orchestrator | 2025-05-23 00:46:45.603483 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-23 00:46:45.603493 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.082) 0:00:25.895 ************ 2025-05-23 00:46:45.603504 | orchestrator | 2025-05-23 00:46:45.603514 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-23 00:46:45.603525 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.223) 0:00:26.119 ************ 2025-05-23 00:46:45.603535 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:45.603546 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:45.603556 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:46:45.603567 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:46:45.603577 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:46:45.603588 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:45.603598 | orchestrator | 2025-05-23 00:46:45.603609 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-23 00:46:45.603620 | orchestrator | Friday 23 May 2025 00:46:08 +0000 (0:00:09.308) 0:00:35.428 ************ 2025-05-23 00:46:45.603636 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:46:45.603647 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:46:45.603658 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:46:45.603668 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:46:45.603679 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:46:45.603689 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:46:45.603700 | orchestrator | 2025-05-23 00:46:45.603710 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-23 00:46:45.603721 | orchestrator | Friday 23 May 2025 00:46:10 +0000 (0:00:01.722) 0:00:37.150 ************ 2025-05-23 00:46:45.603732 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:45.603742 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:46:45.603760 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:46:45.603770 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:45.603852 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:45.603863 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:46:45.603874 | orchestrator | 2025-05-23 00:46:45.603884 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-23 00:46:45.603895 | orchestrator | Friday 23 May 2025 00:46:22 +0000 (0:00:11.346) 0:00:48.497 ************ 2025-05-23 00:46:45.603906 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-23 00:46:45.603917 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-1'}) 2025-05-23 00:46:45.603927 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-23 00:46:45.603938 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-23 00:46:45.603949 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-23 00:46:45.603966 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-23 00:46:45.603977 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-23 00:46:45.603988 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-23 00:46:45.603999 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-23 00:46:45.604009 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-23 00:46:45.604020 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-23 00:46:45.604030 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-23 00:46:45.604041 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604051 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604066 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604076 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604087 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604098 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-23 00:46:45.604109 | orchestrator | 2025-05-23 00:46:45.604119 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-23 00:46:45.604130 | orchestrator | Friday 23 May 2025 00:46:30 +0000 (0:00:08.525) 0:00:57.023 ************ 2025-05-23 00:46:45.604141 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-23 00:46:45.604152 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:46:45.604162 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-23 00:46:45.604173 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:46:45.604183 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-23 00:46:45.604194 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:46:45.604204 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-23 00:46:45.604215 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-23 00:46:45.604225 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-23 00:46:45.604244 | orchestrator | 2025-05-23 00:46:45.604255 
| orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-23 00:46:45.604266 | orchestrator | Friday 23 May 2025 00:46:32 +0000 (0:00:02.394) 0:00:59.417 ************ 2025-05-23 00:46:45.604277 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-23 00:46:45.604287 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:46:45.604298 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-23 00:46:45.604309 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:46:45.604319 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-23 00:46:45.604330 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:46:45.604340 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-23 00:46:45.604356 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-23 00:46:45.604366 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-23 00:46:45.604376 | orchestrator | 2025-05-23 00:46:45.604385 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-23 00:46:45.604394 | orchestrator | Friday 23 May 2025 00:46:36 +0000 (0:00:03.893) 0:01:03.310 ************ 2025-05-23 00:46:45.604404 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:46:45.604413 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:46:45.604422 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:46:45.604432 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:46:45.604441 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:46:45.604450 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:46:45.604460 | orchestrator | 2025-05-23 00:46:45.604469 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:46:45.604479 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:46:45.604489 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:46:45.604499 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:46:45.604508 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:46:45.604518 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:46:45.604527 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:46:45.604537 | orchestrator | 2025-05-23 00:46:45.604546 | orchestrator | 2025-05-23 00:46:45.604556 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:46:45.604565 | orchestrator | Friday 23 May 2025 00:46:45 +0000 (0:00:08.292) 0:01:11.603 ************ 2025-05-23 00:46:45.604574 | orchestrator | =============================================================================== 2025-05-23 00:46:45.604584 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.64s 2025-05-23 00:46:45.604593 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.31s 2025-05-23 00:46:45.604603 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.53s 2025-05-23 
00:46:45.604612 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.96s 2025-05-23 00:46:45.604621 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.89s 2025-05-23 00:46:45.604631 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.58s 2025-05-23 00:46:45.604640 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.40s 2025-05-23 00:46:45.604655 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.39s 2025-05-23 00:46:45.604669 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.28s 2025-05-23 00:46:45.604679 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.06s 2025-05-23 00:46:45.604688 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.03s 2025-05-23 00:46:45.604697 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.92s 2025-05-23 00:46:45.604707 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.72s 2025-05-23 00:46:45.604716 | orchestrator | module-load : Load modules ---------------------------------------------- 1.67s 2025-05-23 00:46:45.604725 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.58s 2025-05-23 00:46:45.604735 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.16s 2025-05-23 00:46:45.604744 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.96s 2025-05-23 00:46:45.604753 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.82s 2025-05-23 00:46:45.604763 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-05-23 00:46:45.604773 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-05-23 00:46:45.604801 | orchestrator | 2025-05-23 00:46:45 | INFO  | Task 2af9f3cc-6416-434b-84cb-80c48e2df7d7 is in state SUCCESS 2025-05-23 00:46:45.604811 | orchestrator | 2025-05-23 00:46:45 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:45.604820 | orchestrator | 2025-05-23 00:46:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:48.646591 | orchestrator | 2025-05-23 00:46:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:48.647907 | orchestrator | 2025-05-23 00:46:48 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:48.647941 | orchestrator | 2025-05-23 00:46:48 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:48.648843 | orchestrator | 2025-05-23 00:46:48 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:46:48.650122 | orchestrator | 2025-05-23 00:46:48 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:48.650166 | orchestrator | 2025-05-23 00:46:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:51.703347 | orchestrator | 2025-05-23 00:46:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:51.707601 | orchestrator | 2025-05-23 00:46:51 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 
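The openvswitch play above boils down to a handful of Open vSwitch settings: per-node external_ids (system-id and hostname), keeping other_config:hw-offload absent, and, on testbed-node-0..2 only, the br-ex bridge with a vxlan0 port. A minimal sketch of roughly equivalent manual ovs-vsctl calls, assuming the container names openvswitch_vswitchd and openvswitch_db taken from the log; kolla-ansible drives this through its own modules, and the host-side ovs-vsctl wrapper was skipped in this run, so this is illustrative only, not the exact commands the role issues:

    # run on a control/network node, e.g. testbed-node-0
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
        external_ids:system-id=testbed-node-0 external_ids:hostname=testbed-node-0
    docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-ex
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-port br-ex vxlan0
    # quick verification, mirroring the container healthchecks defined above
    docker exec openvswitch_vswitchd ovs-appctl version
    docker exec openvswitch_db ovsdb-client list-dbs

Bridge and port changes notify the restart handler, which is why "Restart openvswitch-vswitchd container" runs twice and tops the task recap at 19.64s. The "Task <uuid> is in state STARTED ... Wait 1 second(s)" lines that follow are the OSISM manager polling the state of its background (Celery) tasks; the console simply repeats the check every few seconds until each task reports SUCCESS, as task 1ffd0cbb does at 00:48:07 further below.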
2025-05-23 00:46:51.709491 | orchestrator | 2025-05-23 00:46:51 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:51.711795 | orchestrator | 2025-05-23 00:46:51 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:46:51.714416 | orchestrator | 2025-05-23 00:46:51 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:51.714878 | orchestrator | 2025-05-23 00:46:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:54.764347 | orchestrator | 2025-05-23 00:46:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:54.764637 | orchestrator | 2025-05-23 00:46:54 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:54.765613 | orchestrator | 2025-05-23 00:46:54 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:54.766504 | orchestrator | 2025-05-23 00:46:54 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:46:54.769945 | orchestrator | 2025-05-23 00:46:54 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:54.769986 | orchestrator | 2025-05-23 00:46:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:46:57.797444 | orchestrator | 2025-05-23 00:46:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:46:57.798113 | orchestrator | 2025-05-23 00:46:57 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:46:57.798828 | orchestrator | 2025-05-23 00:46:57 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:46:57.799633 | orchestrator | 2025-05-23 00:46:57 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:46:57.800492 | orchestrator | 2025-05-23 00:46:57 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:46:57.800514 | orchestrator | 2025-05-23 00:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:00.832942 | orchestrator | 2025-05-23 00:47:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:00.834686 | orchestrator | 2025-05-23 00:47:00 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:00.835825 | orchestrator | 2025-05-23 00:47:00 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:00.837307 | orchestrator | 2025-05-23 00:47:00 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:00.838363 | orchestrator | 2025-05-23 00:47:00 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:00.838549 | orchestrator | 2025-05-23 00:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:03.871039 | orchestrator | 2025-05-23 00:47:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:03.871345 | orchestrator | 2025-05-23 00:47:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:03.871987 | orchestrator | 2025-05-23 00:47:03 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:03.878821 | orchestrator | 2025-05-23 00:47:03 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:03.878865 | orchestrator | 2025-05-23 00:47:03 | INFO  | Task 
1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:03.878878 | orchestrator | 2025-05-23 00:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:06.923038 | orchestrator | 2025-05-23 00:47:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:06.923141 | orchestrator | 2025-05-23 00:47:06 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:06.923966 | orchestrator | 2025-05-23 00:47:06 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:06.927122 | orchestrator | 2025-05-23 00:47:06 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:06.927958 | orchestrator | 2025-05-23 00:47:06 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:06.928211 | orchestrator | 2025-05-23 00:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:09.967816 | orchestrator | 2025-05-23 00:47:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:09.968047 | orchestrator | 2025-05-23 00:47:09 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:09.968544 | orchestrator | 2025-05-23 00:47:09 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:09.968987 | orchestrator | 2025-05-23 00:47:09 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:09.971888 | orchestrator | 2025-05-23 00:47:09 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:09.971927 | orchestrator | 2025-05-23 00:47:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:13.005674 | orchestrator | 2025-05-23 00:47:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:13.010193 | orchestrator | 2025-05-23 00:47:13 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:13.012394 | orchestrator | 2025-05-23 00:47:13 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:13.015322 | orchestrator | 2025-05-23 00:47:13 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:13.017280 | orchestrator | 2025-05-23 00:47:13 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:13.017932 | orchestrator | 2025-05-23 00:47:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:16.078116 | orchestrator | 2025-05-23 00:47:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:16.079863 | orchestrator | 2025-05-23 00:47:16 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:16.083038 | orchestrator | 2025-05-23 00:47:16 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:16.084849 | orchestrator | 2025-05-23 00:47:16 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:16.086174 | orchestrator | 2025-05-23 00:47:16 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:16.086510 | orchestrator | 2025-05-23 00:47:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:19.148751 | orchestrator | 2025-05-23 00:47:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:19.150702 | orchestrator | 2025-05-23 00:47:19 | INFO  | Task 
9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:19.154269 | orchestrator | 2025-05-23 00:47:19 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:19.155172 | orchestrator | 2025-05-23 00:47:19 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:19.155665 | orchestrator | 2025-05-23 00:47:19 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:19.155905 | orchestrator | 2025-05-23 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:22.208190 | orchestrator | 2025-05-23 00:47:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:22.208301 | orchestrator | 2025-05-23 00:47:22 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:22.209029 | orchestrator | 2025-05-23 00:47:22 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:22.209851 | orchestrator | 2025-05-23 00:47:22 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:22.210581 | orchestrator | 2025-05-23 00:47:22 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:22.210604 | orchestrator | 2025-05-23 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:25.266360 | orchestrator | 2025-05-23 00:47:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:25.266459 | orchestrator | 2025-05-23 00:47:25 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:25.266738 | orchestrator | 2025-05-23 00:47:25 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:25.274458 | orchestrator | 2025-05-23 00:47:25 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:25.274691 | orchestrator | 2025-05-23 00:47:25 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:25.274714 | orchestrator | 2025-05-23 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:28.334671 | orchestrator | 2025-05-23 00:47:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:28.337022 | orchestrator | 2025-05-23 00:47:28 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:28.338070 | orchestrator | 2025-05-23 00:47:28 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:28.339092 | orchestrator | 2025-05-23 00:47:28 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:28.340854 | orchestrator | 2025-05-23 00:47:28 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:28.341147 | orchestrator | 2025-05-23 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:31.405737 | orchestrator | 2025-05-23 00:47:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:31.406373 | orchestrator | 2025-05-23 00:47:31 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:31.407854 | orchestrator | 2025-05-23 00:47:31 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:31.409055 | orchestrator | 2025-05-23 00:47:31 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:31.410678 | orchestrator | 2025-05-23 
00:47:31 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:31.410722 | orchestrator | 2025-05-23 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:34.453686 | orchestrator | 2025-05-23 00:47:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:34.454200 | orchestrator | 2025-05-23 00:47:34 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:34.454856 | orchestrator | 2025-05-23 00:47:34 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:34.455619 | orchestrator | 2025-05-23 00:47:34 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:34.456435 | orchestrator | 2025-05-23 00:47:34 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:34.456486 | orchestrator | 2025-05-23 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:37.495653 | orchestrator | 2025-05-23 00:47:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:37.499293 | orchestrator | 2025-05-23 00:47:37 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:37.502083 | orchestrator | 2025-05-23 00:47:37 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:37.504780 | orchestrator | 2025-05-23 00:47:37 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:37.505098 | orchestrator | 2025-05-23 00:47:37 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:37.505568 | orchestrator | 2025-05-23 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:40.547081 | orchestrator | 2025-05-23 00:47:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:40.548915 | orchestrator | 2025-05-23 00:47:40 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:40.552872 | orchestrator | 2025-05-23 00:47:40 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:40.554646 | orchestrator | 2025-05-23 00:47:40 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:40.555882 | orchestrator | 2025-05-23 00:47:40 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:40.555946 | orchestrator | 2025-05-23 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:43.602328 | orchestrator | 2025-05-23 00:47:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:43.602599 | orchestrator | 2025-05-23 00:47:43 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:43.604561 | orchestrator | 2025-05-23 00:47:43 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:43.605277 | orchestrator | 2025-05-23 00:47:43 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:43.606090 | orchestrator | 2025-05-23 00:47:43 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:43.606215 | orchestrator | 2025-05-23 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:46.665332 | orchestrator | 2025-05-23 00:47:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:46.666739 | orchestrator | 2025-05-23 
00:47:46 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:46.668241 | orchestrator | 2025-05-23 00:47:46 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:46.669603 | orchestrator | 2025-05-23 00:47:46 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:46.670864 | orchestrator | 2025-05-23 00:47:46 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:46.670896 | orchestrator | 2025-05-23 00:47:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:49.713208 | orchestrator | 2025-05-23 00:47:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:49.715874 | orchestrator | 2025-05-23 00:47:49 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:49.716330 | orchestrator | 2025-05-23 00:47:49 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:49.717859 | orchestrator | 2025-05-23 00:47:49 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:49.717886 | orchestrator | 2025-05-23 00:47:49 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:49.717898 | orchestrator | 2025-05-23 00:47:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:52.753114 | orchestrator | 2025-05-23 00:47:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:52.753188 | orchestrator | 2025-05-23 00:47:52 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:52.753220 | orchestrator | 2025-05-23 00:47:52 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:52.753229 | orchestrator | 2025-05-23 00:47:52 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:52.753247 | orchestrator | 2025-05-23 00:47:52 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:52.753256 | orchestrator | 2025-05-23 00:47:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:55.779482 | orchestrator | 2025-05-23 00:47:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:55.781000 | orchestrator | 2025-05-23 00:47:55 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:55.781232 | orchestrator | 2025-05-23 00:47:55 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:55.781917 | orchestrator | 2025-05-23 00:47:55 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:55.782289 | orchestrator | 2025-05-23 00:47:55 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:55.782313 | orchestrator | 2025-05-23 00:47:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:47:58.817280 | orchestrator | 2025-05-23 00:47:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:47:58.817486 | orchestrator | 2025-05-23 00:47:58 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:47:58.818406 | orchestrator | 2025-05-23 00:47:58 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:47:58.819111 | orchestrator | 2025-05-23 00:47:58 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:47:58.819929 | 
orchestrator | 2025-05-23 00:47:58 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:47:58.819951 | orchestrator | 2025-05-23 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:01.865087 | orchestrator | 2025-05-23 00:48:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:01.866319 | orchestrator | 2025-05-23 00:48:01 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:01.866352 | orchestrator | 2025-05-23 00:48:01 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:01.867685 | orchestrator | 2025-05-23 00:48:01 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:01.867707 | orchestrator | 2025-05-23 00:48:01 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:48:01.867718 | orchestrator | 2025-05-23 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:04.910960 | orchestrator | 2025-05-23 00:48:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:04.911356 | orchestrator | 2025-05-23 00:48:04 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:04.912244 | orchestrator | 2025-05-23 00:48:04 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:04.912625 | orchestrator | 2025-05-23 00:48:04 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:04.913490 | orchestrator | 2025-05-23 00:48:04 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state STARTED 2025-05-23 00:48:04.913509 | orchestrator | 2025-05-23 00:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:07.953109 | orchestrator | 2025-05-23 00:48:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:07.953631 | orchestrator | 2025-05-23 00:48:07 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:07.955673 | orchestrator | 2025-05-23 00:48:07 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:07.957286 | orchestrator | 2025-05-23 00:48:07 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:07.958787 | orchestrator | 2025-05-23 00:48:07 | INFO  | Task 1ffd0cbb-671d-4875-a1b7-2542265b148e is in state SUCCESS 2025-05-23 00:48:07.959772 | orchestrator | 2025-05-23 00:48:07.959801 | orchestrator | 2025-05-23 00:48:07.959813 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-23 00:48:07.959824 | orchestrator | 2025-05-23 00:48:07.959835 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-23 00:48:07.959846 | orchestrator | Friday 23 May 2025 00:45:57 +0000 (0:00:00.353) 0:00:00.353 ************ 2025-05-23 00:48:07.959857 | orchestrator | ok: [localhost] => { 2025-05-23 00:48:07.959869 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-05-23 00:48:07.959913 | orchestrator | } 2025-05-23 00:48:07.959924 | orchestrator | 2025-05-23 00:48:07.959935 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-23 00:48:07.959959 | orchestrator | Friday 23 May 2025 00:45:57 +0000 (0:00:00.065) 0:00:00.418 ************ 2025-05-23 00:48:07.959971 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-23 00:48:07.959983 | orchestrator | ...ignoring 2025-05-23 00:48:07.959994 | orchestrator | 2025-05-23 00:48:07.960005 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-23 00:48:07.960015 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:02.383) 0:00:02.802 ************ 2025-05-23 00:48:07.960026 | orchestrator | skipping: [localhost] 2025-05-23 00:48:07.960037 | orchestrator | 2025-05-23 00:48:07.960048 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-23 00:48:07.960058 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.065) 0:00:02.868 ************ 2025-05-23 00:48:07.960069 | orchestrator | ok: [localhost] 2025-05-23 00:48:07.960079 | orchestrator | 2025-05-23 00:48:07.960090 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:48:07.960101 | orchestrator | 2025-05-23 00:48:07.960111 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:48:07.960122 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:00.223) 0:00:03.091 ************ 2025-05-23 00:48:07.960132 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:48:07.960143 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:48:07.960154 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:48:07.960164 | orchestrator | 2025-05-23 00:48:07.960175 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:48:07.960186 | orchestrator | Friday 23 May 2025 00:46:00 +0000 (0:00:00.879) 0:00:03.970 ************ 2025-05-23 00:48:07.960197 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-23 00:48:07.960208 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-23 00:48:07.960219 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-23 00:48:07.960230 | orchestrator | 2025-05-23 00:48:07.960240 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-23 00:48:07.960251 | orchestrator | 2025-05-23 00:48:07.960261 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-23 00:48:07.960272 | orchestrator | Friday 23 May 2025 00:46:01 +0000 (0:00:00.641) 0:00:04.612 ************ 2025-05-23 00:48:07.960283 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:48:07.960308 | orchestrator | 2025-05-23 00:48:07.960319 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-23 00:48:07.960330 | orchestrator | Friday 23 May 2025 00:46:02 +0000 (0:00:00.815) 0:00:05.428 ************ 2025-05-23 00:48:07.960340 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:48:07.960351 | orchestrator | 2025-05-23 00:48:07.960362 | orchestrator | TASK 
[rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-23 00:48:07.960374 | orchestrator | Friday 23 May 2025 00:46:03 +0000 (0:00:01.025) 0:00:06.453 ************ 2025-05-23 00:48:07.960387 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960399 | orchestrator | 2025-05-23 00:48:07.960412 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-05-23 00:48:07.960424 | orchestrator | Friday 23 May 2025 00:46:03 +0000 (0:00:00.378) 0:00:06.832 ************ 2025-05-23 00:48:07.960436 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960449 | orchestrator | 2025-05-23 00:48:07.960461 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-23 00:48:07.960473 | orchestrator | Friday 23 May 2025 00:46:04 +0000 (0:00:00.506) 0:00:07.338 ************ 2025-05-23 00:48:07.960486 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960498 | orchestrator | 2025-05-23 00:48:07.960510 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-23 00:48:07.960523 | orchestrator | Friday 23 May 2025 00:46:04 +0000 (0:00:00.306) 0:00:07.645 ************ 2025-05-23 00:48:07.960535 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960547 | orchestrator | 2025-05-23 00:48:07.960559 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-23 00:48:07.960571 | orchestrator | Friday 23 May 2025 00:46:04 +0000 (0:00:00.266) 0:00:07.911 ************ 2025-05-23 00:48:07.960583 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:48:07.960595 | orchestrator | 2025-05-23 00:48:07.960607 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-23 00:48:07.960620 | orchestrator | Friday 23 May 2025 00:46:05 +0000 (0:00:00.836) 0:00:08.748 ************ 2025-05-23 00:48:07.960632 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:48:07.960644 | orchestrator | 2025-05-23 00:48:07.960657 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-23 00:48:07.960670 | orchestrator | Friday 23 May 2025 00:46:06 +0000 (0:00:00.814) 0:00:09.562 ************ 2025-05-23 00:48:07.960682 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960696 | orchestrator | 2025-05-23 00:48:07.960708 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-23 00:48:07.960721 | orchestrator | Friday 23 May 2025 00:46:06 +0000 (0:00:00.315) 0:00:09.877 ************ 2025-05-23 00:48:07.960789 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.960803 | orchestrator | 2025-05-23 00:48:07.960824 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-23 00:48:07.960835 | orchestrator | Friday 23 May 2025 00:46:06 +0000 (0:00:00.289) 0:00:10.167 ************ 2025-05-23 00:48:07.960857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960908 | orchestrator | 2025-05-23 00:48:07.960919 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-23 00:48:07.960930 | orchestrator | Friday 23 May 2025 00:46:07 +0000 (0:00:00.867) 0:00:11.034 ************ 2025-05-23 00:48:07.960950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.960999 | orchestrator | 2025-05-23 00:48:07.961010 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-23 00:48:07.961021 | orchestrator | Friday 23 May 2025 00:46:09 +0000 (0:00:01.849) 0:00:12.884 ************ 2025-05-23 00:48:07.961032 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-23 00:48:07.961043 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-23 00:48:07.961054 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-23 00:48:07.961065 | orchestrator | 2025-05-23 00:48:07.961075 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-23 00:48:07.961086 | orchestrator | Friday 23 May 2025 00:46:11 +0000 (0:00:01.964) 0:00:14.848 ************ 2025-05-23 00:48:07.961097 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-23 00:48:07.961107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-23 00:48:07.961118 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-23 00:48:07.961128 | orchestrator | 2025-05-23 00:48:07.961139 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-23 00:48:07.961150 | orchestrator | Friday 23 May 2025 00:46:14 +0000 (0:00:03.028) 0:00:17.877 ************ 2025-05-23 00:48:07.961160 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-23 00:48:07.961171 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-23 00:48:07.961181 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-23 00:48:07.961192 | orchestrator | 2025-05-23 00:48:07.961208 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-23 00:48:07.961220 | orchestrator | Friday 23 May 2025 00:46:15 +0000 (0:00:01.335) 0:00:19.212 ************ 2025-05-23 00:48:07.961230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-23 00:48:07.961247 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-23 00:48:07.961258 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-23 00:48:07.961269 | orchestrator | 2025-05-23 00:48:07.961280 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-23 00:48:07.961294 | orchestrator | Friday 23 May 2025 00:46:17 +0000 (0:00:01.646) 0:00:20.859 ************ 2025-05-23 00:48:07.961304 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-23 00:48:07.961314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-23 00:48:07.961324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-23 00:48:07.961333 | orchestrator | 2025-05-23 00:48:07.961343 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-23 00:48:07.961353 | orchestrator | Friday 23 May 2025 00:46:18 +0000 (0:00:01.394) 0:00:22.253 ************ 2025-05-23 00:48:07.961362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-23 00:48:07.961372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-23 00:48:07.961381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-23 00:48:07.961391 | orchestrator | 2025-05-23 
00:48:07.961400 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-23 00:48:07.961410 | orchestrator | Friday 23 May 2025 00:46:20 +0000 (0:00:01.984) 0:00:24.238 ************ 2025-05-23 00:48:07.961419 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.961429 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:48:07.961438 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:48:07.961448 | orchestrator | 2025-05-23 00:48:07.961457 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-23 00:48:07.961467 | orchestrator | Friday 23 May 2025 00:46:21 +0000 (0:00:00.858) 0:00:25.096 ************ 2025-05-23 00:48:07.961477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.961488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.961515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:48:07.961527 | orchestrator | 2025-05-23 00:48:07.961536 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-23 00:48:07.961546 | orchestrator | Friday 23 May 2025 00:46:23 +0000 (0:00:02.031) 0:00:27.127 ************ 2025-05-23 00:48:07.961555 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:48:07.961565 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:48:07.961574 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:48:07.961584 | orchestrator | 2025-05-23 00:48:07.961593 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-23 00:48:07.961603 | orchestrator | Friday 23 May 2025 00:46:24 +0000 (0:00:00.909) 0:00:28.037 ************ 2025-05-23 00:48:07.961612 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:48:07.961622 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:48:07.961631 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:48:07.961641 | orchestrator | 2025-05-23 00:48:07.961650 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-23 00:48:07.961660 | orchestrator | Friday 23 May 2025 00:46:31 +0000 (0:00:06.367) 0:00:34.404 ************ 2025-05-23 00:48:07.961669 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:48:07.961679 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:48:07.961688 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:48:07.961698 | orchestrator | 2025-05-23 00:48:07.961707 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-23 00:48:07.961716 | orchestrator | 2025-05-23 00:48:07.961726 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-23 00:48:07.961759 | orchestrator | Friday 23 May 2025 00:46:31 +0000 (0:00:00.391) 0:00:34.796 ************ 2025-05-23 00:48:07.961777 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:48:07.961793 | orchestrator | 2025-05-23 00:48:07.961806 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-23 00:48:07.961816 | orchestrator | Friday 23 May 2025 00:46:32 +0000 (0:00:00.954) 0:00:35.750 ************ 2025-05-23 00:48:07.961825 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:48:07.961835 | orchestrator | 2025-05-23 00:48:07.961844 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-23 00:48:07.961854 | orchestrator | Friday 23 May 2025 00:46:32 +0000 (0:00:00.232) 0:00:35.983 ************ 2025-05-23 00:48:07.961863 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:48:07.961873 | orchestrator | 2025-05-23 00:48:07.961882 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-23 00:48:07.961891 | orchestrator | Friday 23 May 2025 00:46:34 +0000 (0:00:01.771) 0:00:37.754 ************ 2025-05-23 00:48:07.961907 | orchestrator | 
changed: [testbed-node-0] 2025-05-23 00:48:07.961917 | orchestrator | 2025-05-23 00:48:07.961926 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-23 00:48:07.961935 | orchestrator | 2025-05-23 00:48:07.961945 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-23 00:48:07.961954 | orchestrator | Friday 23 May 2025 00:47:27 +0000 (0:00:53.481) 0:01:31.236 ************ 2025-05-23 00:48:07.961964 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:48:07.961973 | orchestrator | 2025-05-23 00:48:07.961982 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-23 00:48:07.961992 | orchestrator | Friday 23 May 2025 00:47:28 +0000 (0:00:00.780) 0:01:32.016 ************ 2025-05-23 00:48:07.962002 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:48:07.962011 | orchestrator | 2025-05-23 00:48:07.962067 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-23 00:48:07.962077 | orchestrator | Friday 23 May 2025 00:47:28 +0000 (0:00:00.237) 0:01:32.254 ************ 2025-05-23 00:48:07.962087 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:48:07.962096 | orchestrator | 2025-05-23 00:48:07.962105 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-23 00:48:07.962115 | orchestrator | Friday 23 May 2025 00:47:36 +0000 (0:00:07.266) 0:01:39.520 ************ 2025-05-23 00:48:07.962124 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:48:07.962134 | orchestrator | 2025-05-23 00:48:07.962143 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-23 00:48:07.962152 | orchestrator | 2025-05-23 00:48:07.962162 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-23 00:48:07.962171 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:09.834) 0:01:49.354 ************ 2025-05-23 00:48:07.962181 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:48:07.962190 | orchestrator | 2025-05-23 00:48:07.962199 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-23 00:48:07.962209 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:00.601) 0:01:49.956 ************ 2025-05-23 00:48:07.962218 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:48:07.962228 | orchestrator | 2025-05-23 00:48:07.962238 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-23 00:48:07.962254 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:00.207) 0:01:50.164 ************ 2025-05-23 00:48:07.962264 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:48:07.962274 | orchestrator | 2025-05-23 00:48:07.962283 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-23 00:48:07.962293 | orchestrator | Friday 23 May 2025 00:47:48 +0000 (0:00:01.777) 0:01:51.941 ************ 2025-05-23 00:48:07.962302 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:48:07.962312 | orchestrator | 2025-05-23 00:48:07.962321 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-23 00:48:07.962331 | orchestrator | 2025-05-23 00:48:07.962340 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-23 
00:48:07.962350 | orchestrator | Friday 23 May 2025 00:48:02 +0000 (0:00:13.736) 0:02:05.677 ************ 2025-05-23 00:48:07.962359 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:48:07.962369 | orchestrator | 2025-05-23 00:48:07.962378 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-23 00:48:07.962388 | orchestrator | Friday 23 May 2025 00:48:03 +0000 (0:00:01.095) 0:02:06.772 ************ 2025-05-23 00:48:07.962397 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-23 00:48:07.962407 | orchestrator | enable_outward_rabbitmq_True 2025-05-23 00:48:07.962417 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-23 00:48:07.962426 | orchestrator | outward_rabbitmq_restart 2025-05-23 00:48:07.962436 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:48:07.962451 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:48:07.962461 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:48:07.962470 | orchestrator | 2025-05-23 00:48:07.962480 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-23 00:48:07.962489 | orchestrator | skipping: no hosts matched 2025-05-23 00:48:07.962499 | orchestrator | 2025-05-23 00:48:07.962508 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-23 00:48:07.962518 | orchestrator | skipping: no hosts matched 2025-05-23 00:48:07.962527 | orchestrator | 2025-05-23 00:48:07.962536 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-23 00:48:07.962546 | orchestrator | skipping: no hosts matched 2025-05-23 00:48:07.962555 | orchestrator | 2025-05-23 00:48:07.962565 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:48:07.962575 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-23 00:48:07.962584 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-23 00:48:07.962594 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:48:07.962604 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 00:48:07.962614 | orchestrator | 2025-05-23 00:48:07.962623 | orchestrator | 2025-05-23 00:48:07.962633 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:48:07.962642 | orchestrator | Friday 23 May 2025 00:48:06 +0000 (0:00:02.599) 0:02:09.372 ************ 2025-05-23 00:48:07.962652 | orchestrator | =============================================================================== 2025-05-23 00:48:07.962661 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.05s 2025-05-23 00:48:07.962671 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.81s 2025-05-23 00:48:07.962680 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.37s 2025-05-23 00:48:07.963240 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.03s 2025-05-23 00:48:07.963256 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.60s 2025-05-23 
00:48:07.963266 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.38s 2025-05-23 00:48:07.963276 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.34s 2025-05-23 00:48:07.963286 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.03s 2025-05-23 00:48:07.963298 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.98s 2025-05-23 00:48:07.963308 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.96s 2025-05-23 00:48:07.963318 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.85s 2025-05-23 00:48:07.963327 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.65s 2025-05-23 00:48:07.963336 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.39s 2025-05-23 00:48:07.963346 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.34s 2025-05-23 00:48:07.963355 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.10s 2025-05-23 00:48:07.963365 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2025-05-23 00:48:07.963374 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.91s 2025-05-23 00:48:07.963383 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2025-05-23 00:48:07.963393 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.87s 2025-05-23 00:48:07.963409 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.86s 2025-05-23 00:48:07.963418 | orchestrator | 2025-05-23 00:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:10.989186 | orchestrator | 2025-05-23 00:48:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:10.991345 | orchestrator | 2025-05-23 00:48:10 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:10.992811 | orchestrator | 2025-05-23 00:48:10 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:10.993984 | orchestrator | 2025-05-23 00:48:10 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:10.994238 | orchestrator | 2025-05-23 00:48:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:14.040905 | orchestrator | 2025-05-23 00:48:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:14.041029 | orchestrator | 2025-05-23 00:48:14 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:14.042854 | orchestrator | 2025-05-23 00:48:14 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:14.044470 | orchestrator | 2025-05-23 00:48:14 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:14.044571 | orchestrator | 2025-05-23 00:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:17.086241 | orchestrator | 2025-05-23 00:48:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:17.086468 | orchestrator | 2025-05-23 00:48:17 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 
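The repeated status lines above and below come from a simple wait loop: the deployment driver polls the state of the four background tasks and sleeps between checks until every task has left the STARTED state. A minimal sketch of such a loop is shown here; the get_task_state() helper is only a simulated stand-in for the real OSISM/Celery result backend, which is not part of this log.

    import itertools
    import time

    # Simulated task backend for illustration only: each task reports STARTED a
    # few times and then SUCCESS. In the real job the states come from the
    # task result backend, which is not modelled here.
    _FAKE_STATES = {}

    def get_task_state(task_id: str) -> str:
        states = _FAKE_STATES.setdefault(
            task_id, itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS"))
        )
        return next(states)

    def wait_for_tasks(task_ids, interval=1):
        """Poll every task until it has left the PENDING/STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        wait_for_tasks([
            "eee81a36-e0fa-4360-a4d6-6ece23412765",
            "9fe59257-9360-4a03-8fb9-115db9770c45",
        ])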
2025-05-23 00:48:17.087456 | orchestrator | 2025-05-23 00:48:17 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:17.091598 | orchestrator | 2025-05-23 00:48:17 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:17.091675 | orchestrator | 2025-05-23 00:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:20.139093 | orchestrator | 2025-05-23 00:48:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:20.139697 | orchestrator | 2025-05-23 00:48:20 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:20.142393 | orchestrator | 2025-05-23 00:48:20 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:20.142924 | orchestrator | 2025-05-23 00:48:20 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:20.142949 | orchestrator | 2025-05-23 00:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:23.202495 | orchestrator | 2025-05-23 00:48:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:23.204174 | orchestrator | 2025-05-23 00:48:23 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:23.205259 | orchestrator | 2025-05-23 00:48:23 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:23.206986 | orchestrator | 2025-05-23 00:48:23 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:23.207016 | orchestrator | 2025-05-23 00:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:26.254328 | orchestrator | 2025-05-23 00:48:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:26.256611 | orchestrator | 2025-05-23 00:48:26 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:26.256669 | orchestrator | 2025-05-23 00:48:26 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:26.261145 | orchestrator | 2025-05-23 00:48:26 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:26.261238 | orchestrator | 2025-05-23 00:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:29.314433 | orchestrator | 2025-05-23 00:48:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:29.314626 | orchestrator | 2025-05-23 00:48:29 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:29.315202 | orchestrator | 2025-05-23 00:48:29 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:29.316066 | orchestrator | 2025-05-23 00:48:29 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:29.316092 | orchestrator | 2025-05-23 00:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:32.372837 | orchestrator | 2025-05-23 00:48:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:32.373126 | orchestrator | 2025-05-23 00:48:32 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:32.374260 | orchestrator | 2025-05-23 00:48:32 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:32.374946 | orchestrator | 2025-05-23 00:48:32 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 
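For reference, the rabbitmq container definition dumped earlier in this play declares a healthcheck ('CMD-SHELL healthcheck_rabbitmq', interval 30, timeout 30, retries 3, start_period 5). Outside of Kolla, the resulting Docker health status can be read back with docker inspect; the following sketch only assumes a locally running container named rabbitmq and is not taken from the job itself.

    import subprocess

    def container_health(name: str) -> str:
        """Return the Docker health status (starting/healthy/unhealthy) of a container."""
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        # Example: inspect the rabbitmq container deployed above.
        print(container_health("rabbitmq"))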
2025-05-23 00:48:32.375387 | orchestrator | 2025-05-23 00:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:35.427683 | orchestrator | 2025-05-23 00:48:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:35.429851 | orchestrator | 2025-05-23 00:48:35 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:35.433801 | orchestrator | 2025-05-23 00:48:35 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:35.437235 | orchestrator | 2025-05-23 00:48:35 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:35.437267 | orchestrator | 2025-05-23 00:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:38.491180 | orchestrator | 2025-05-23 00:48:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:38.491785 | orchestrator | 2025-05-23 00:48:38 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:38.492987 | orchestrator | 2025-05-23 00:48:38 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:38.494327 | orchestrator | 2025-05-23 00:48:38 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:38.494368 | orchestrator | 2025-05-23 00:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:41.548201 | orchestrator | 2025-05-23 00:48:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:41.552074 | orchestrator | 2025-05-23 00:48:41 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:41.552112 | orchestrator | 2025-05-23 00:48:41 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:41.553965 | orchestrator | 2025-05-23 00:48:41 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:41.554082 | orchestrator | 2025-05-23 00:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:44.599266 | orchestrator | 2025-05-23 00:48:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:44.602439 | orchestrator | 2025-05-23 00:48:44 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:44.605419 | orchestrator | 2025-05-23 00:48:44 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:44.607023 | orchestrator | 2025-05-23 00:48:44 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:44.607162 | orchestrator | 2025-05-23 00:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:47.649367 | orchestrator | 2025-05-23 00:48:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:47.650611 | orchestrator | 2025-05-23 00:48:47 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:47.654165 | orchestrator | 2025-05-23 00:48:47 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:47.655239 | orchestrator | 2025-05-23 00:48:47 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:47.655264 | orchestrator | 2025-05-23 00:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:50.700678 | orchestrator | 2025-05-23 00:48:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:50.703375 
| orchestrator | 2025-05-23 00:48:50 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:50.706466 | orchestrator | 2025-05-23 00:48:50 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:50.708670 | orchestrator | 2025-05-23 00:48:50 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:50.708697 | orchestrator | 2025-05-23 00:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:53.757666 | orchestrator | 2025-05-23 00:48:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:53.759964 | orchestrator | 2025-05-23 00:48:53 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:53.761096 | orchestrator | 2025-05-23 00:48:53 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:53.762274 | orchestrator | 2025-05-23 00:48:53 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:53.762301 | orchestrator | 2025-05-23 00:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:56.804836 | orchestrator | 2025-05-23 00:48:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:56.807023 | orchestrator | 2025-05-23 00:48:56 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:56.808493 | orchestrator | 2025-05-23 00:48:56 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:56.810546 | orchestrator | 2025-05-23 00:48:56 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:56.810580 | orchestrator | 2025-05-23 00:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:48:59.862950 | orchestrator | 2025-05-23 00:48:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:48:59.864425 | orchestrator | 2025-05-23 00:48:59 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:48:59.867428 | orchestrator | 2025-05-23 00:48:59 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:48:59.869316 | orchestrator | 2025-05-23 00:48:59 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:48:59.869471 | orchestrator | 2025-05-23 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:02.914438 | orchestrator | 2025-05-23 00:49:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:02.914541 | orchestrator | 2025-05-23 00:49:02 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:02.914768 | orchestrator | 2025-05-23 00:49:02 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:02.915515 | orchestrator | 2025-05-23 00:49:02 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:49:02.915646 | orchestrator | 2025-05-23 00:49:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:05.944880 | orchestrator | 2025-05-23 00:49:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:05.945242 | orchestrator | 2025-05-23 00:49:05 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:05.946002 | orchestrator | 2025-05-23 00:49:05 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:05.946751 | 
orchestrator | 2025-05-23 00:49:05 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state STARTED 2025-05-23 00:49:05.946770 | orchestrator | 2025-05-23 00:49:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:08.982383 | orchestrator | 2025-05-23 00:49:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:08.983203 | orchestrator | 2025-05-23 00:49:08 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:08.984947 | orchestrator | 2025-05-23 00:49:08 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:08.986763 | orchestrator | 2025-05-23 00:49:08 | INFO  | Task 459c53a9-d620-4eba-a1bf-8932e6282396 is in state SUCCESS 2025-05-23 00:49:08.986928 | orchestrator | 2025-05-23 00:49:08.989444 | orchestrator | 2025-05-23 00:49:08.989484 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:49:08.989498 | orchestrator | 2025-05-23 00:49:08.989509 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:49:08.989520 | orchestrator | Friday 23 May 2025 00:46:49 +0000 (0:00:00.228) 0:00:00.228 ************ 2025-05-23 00:49:08.989531 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.989543 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.989554 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.989564 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:49:08.989575 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:49:08.989585 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:49:08.989596 | orchestrator | 2025-05-23 00:49:08.989607 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:49:08.989617 | orchestrator | Friday 23 May 2025 00:46:50 +0000 (0:00:00.701) 0:00:00.929 ************ 2025-05-23 00:49:08.989628 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-23 00:49:08.989639 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-23 00:49:08.989650 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-23 00:49:08.989661 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-23 00:49:08.989671 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-23 00:49:08.989682 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-23 00:49:08.989717 | orchestrator | 2025-05-23 00:49:08.989729 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-23 00:49:08.989739 | orchestrator | 2025-05-23 00:49:08.989750 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-23 00:49:08.989761 | orchestrator | Friday 23 May 2025 00:46:51 +0000 (0:00:01.680) 0:00:02.610 ************ 2025-05-23 00:49:08.989794 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:49:08.989806 | orchestrator | 2025-05-23 00:49:08.989817 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-23 00:49:08.989828 | orchestrator | Friday 23 May 2025 00:46:53 +0000 (0:00:01.439) 0:00:04.049 ************ 2025-05-23 00:49:08.989841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989937 | orchestrator | 2025-05-23 00:49:08.989948 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-23 00:49:08.989959 | orchestrator | Friday 23 May 2025 00:46:54 +0000 (0:00:01.367) 0:00:05.416 ************ 2025-05-23 00:49:08.989970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989988 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.989999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990122 | orchestrator | 2025-05-23 00:49:08.990133 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-23 00:49:08.990143 | orchestrator | Friday 23 May 2025 00:46:57 +0000 (0:00:03.030) 0:00:08.447 ************ 2025-05-23 00:49:08.990155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990246 | orchestrator | 2025-05-23 00:49:08.990258 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-23 00:49:08.990269 | orchestrator | Friday 23 May 2025 00:46:58 +0000 (0:00:00.904) 0:00:09.352 ************ 2025-05-23 00:49:08.990279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990363 | orchestrator | 2025-05-23 00:49:08.990374 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-23 00:49:08.990385 | orchestrator | Friday 23 May 2025 00:47:00 +0000 (0:00:01.861) 0:00:11.214 ************ 2025-05-23 00:49:08.990396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.990461 | orchestrator | 2025-05-23 00:49:08.990472 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-23 00:49:08.990483 | orchestrator | Friday 23 May 2025 00:47:01 +0000 (0:00:01.464) 0:00:12.678 ************ 2025-05-23 00:49:08.990494 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.990511 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.990532 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.990552 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:49:08.990572 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:49:08.990593 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:49:08.990625 | orchestrator | 2025-05-23 00:49:08.990637 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-23 00:49:08.990648 | orchestrator | Friday 23 May 2025 00:47:04 +0000 (0:00:02.874) 0:00:15.552 ************ 2025-05-23 00:49:08.990659 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-23 00:49:08.990674 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-23 00:49:08.990685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-23 00:49:08.990739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-23 00:49:08.990751 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-23 00:49:08.990762 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-23 00:49:08.990772 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-23 00:49:08.990783 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-23 00:49:08.990794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-23 00:49:08.990805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-23 00:49:08.990815 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 
'value': 'geneve'}) 2025-05-23 00:49:08.990826 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-23 00:49:08.990837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990870 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990881 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990892 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-23 00:49:08.990903 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990914 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990936 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990947 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990958 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-23 00:49:08.990969 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.990980 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.990990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.991001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.991019 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.991030 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-23 00:49:08.991041 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991084 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991096 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-23 00:49:08.991106 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-23 00:49:08.991117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-23 00:49:08.991128 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-23 00:49:08.991143 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-23 00:49:08.991160 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-23 00:49:08.991172 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-23 00:49:08.991182 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-23 00:49:08.991193 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-23 00:49:08.991204 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-23 00:49:08.991215 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-23 00:49:08.991226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-23 00:49:08.991236 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-23 00:49:08.991247 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-23 00:49:08.991258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-23 00:49:08.991269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-23 00:49:08.991279 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-23 00:49:08.991290 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-23 00:49:08.991301 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-23 00:49:08.991311 | orchestrator | 2025-05-23 00:49:08.991322 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991333 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:19.878) 0:00:35.430 ************ 2025-05-23 00:49:08.991350 | orchestrator | 2025-05-23 00:49:08.991361 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991372 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:00.055) 
0:00:35.486 ************ 2025-05-23 00:49:08.991382 | orchestrator | 2025-05-23 00:49:08.991393 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991404 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:00.206) 0:00:35.692 ************ 2025-05-23 00:49:08.991414 | orchestrator | 2025-05-23 00:49:08.991425 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991436 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:00.050) 0:00:35.742 ************ 2025-05-23 00:49:08.991446 | orchestrator | 2025-05-23 00:49:08.991457 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991468 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:00.053) 0:00:35.796 ************ 2025-05-23 00:49:08.991478 | orchestrator | 2025-05-23 00:49:08.991488 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-23 00:49:08.991499 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:00.053) 0:00:35.850 ************ 2025-05-23 00:49:08.991510 | orchestrator | 2025-05-23 00:49:08.991520 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-23 00:49:08.991531 | orchestrator | Friday 23 May 2025 00:47:25 +0000 (0:00:00.073) 0:00:35.924 ************ 2025-05-23 00:49:08.991541 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:49:08.991552 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:49:08.991563 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.991573 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.991584 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.991594 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:49:08.991605 | orchestrator | 2025-05-23 00:49:08.991616 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-23 00:49:08.991626 | orchestrator | Friday 23 May 2025 00:47:27 +0000 (0:00:02.040) 0:00:37.964 ************ 2025-05-23 00:49:08.991637 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.991648 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.991658 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.991669 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:49:08.991680 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:49:08.991741 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:49:08.991754 | orchestrator | 2025-05-23 00:49:08.991766 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-23 00:49:08.991776 | orchestrator | 2025-05-23 00:49:08.991787 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-23 00:49:08.991798 | orchestrator | Friday 23 May 2025 00:47:50 +0000 (0:00:23.324) 0:01:01.289 ************ 2025-05-23 00:49:08.991809 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:49:08.991820 | orchestrator | 2025-05-23 00:49:08.991830 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-23 00:49:08.991845 | orchestrator | Friday 23 May 2025 00:47:51 +0000 (0:00:01.330) 0:01:02.619 ************ 2025-05-23 00:49:08.991857 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:49:08.991868 | orchestrator | 2025-05-23 00:49:08.991885 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-23 00:49:08.991896 | orchestrator | Friday 23 May 2025 00:47:52 +0000 (0:00:00.588) 0:01:03.208 ************ 2025-05-23 00:49:08.991907 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.991918 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.991929 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.991939 | orchestrator | 2025-05-23 00:49:08.991950 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-23 00:49:08.991961 | orchestrator | Friday 23 May 2025 00:47:53 +0000 (0:00:01.047) 0:01:04.256 ************ 2025-05-23 00:49:08.991979 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.991990 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.992001 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.992012 | orchestrator | 2025-05-23 00:49:08.992023 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-23 00:49:08.992034 | orchestrator | Friday 23 May 2025 00:47:53 +0000 (0:00:00.490) 0:01:04.746 ************ 2025-05-23 00:49:08.992044 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.992055 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.992066 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.992077 | orchestrator | 2025-05-23 00:49:08.992096 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-23 00:49:08.992112 | orchestrator | Friday 23 May 2025 00:47:54 +0000 (0:00:00.510) 0:01:05.256 ************ 2025-05-23 00:49:08.992123 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.992133 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.992144 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.992154 | orchestrator | 2025-05-23 00:49:08.992165 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-23 00:49:08.992176 | orchestrator | Friday 23 May 2025 00:47:55 +0000 (0:00:00.789) 0:01:06.046 ************ 2025-05-23 00:49:08.992186 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.992197 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.992207 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.992218 | orchestrator | 2025-05-23 00:49:08.992227 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-23 00:49:08.992237 | orchestrator | Friday 23 May 2025 00:47:55 +0000 (0:00:00.531) 0:01:06.578 ************ 2025-05-23 00:49:08.992247 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992256 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992265 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992275 | orchestrator | 2025-05-23 00:49:08.992285 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-23 00:49:08.992294 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:00.654) 0:01:07.232 ************ 2025-05-23 00:49:08.992304 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992313 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992323 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992332 | orchestrator | 2025-05-23 00:49:08.992342 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN NB service port liveness] ************* 2025-05-23 00:49:08.992351 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:00.348) 0:01:07.580 ************ 2025-05-23 00:49:08.992361 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992370 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992380 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992389 | orchestrator | 2025-05-23 00:49:08.992399 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-23 00:49:08.992408 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.424) 0:01:08.005 ************ 2025-05-23 00:49:08.992418 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992427 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992437 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992446 | orchestrator | 2025-05-23 00:49:08.992456 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-23 00:49:08.992465 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.232) 0:01:08.237 ************ 2025-05-23 00:49:08.992475 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992484 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992494 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992503 | orchestrator | 2025-05-23 00:49:08.992513 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-23 00:49:08.992522 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.325) 0:01:08.562 ************ 2025-05-23 00:49:08.992531 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992551 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992560 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992570 | orchestrator | 2025-05-23 00:49:08.992579 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-23 00:49:08.992589 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.352) 0:01:08.915 ************ 2025-05-23 00:49:08.992598 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992608 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992617 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992627 | orchestrator | 2025-05-23 00:49:08.992637 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-23 00:49:08.992646 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.295) 0:01:09.211 ************ 2025-05-23 00:49:08.992656 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992665 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992674 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992684 | orchestrator | 2025-05-23 00:49:08.992714 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-23 00:49:08.992724 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.239) 0:01:09.451 ************ 2025-05-23 00:49:08.992733 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992743 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992752 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992761 | orchestrator | 2025-05-23 00:49:08.992771 | orchestrator | TASK [ovn-db : Get OVN SB database information] 
******************************** 2025-05-23 00:49:08.992784 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.322) 0:01:09.773 ************ 2025-05-23 00:49:08.992794 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992804 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992813 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992823 | orchestrator | 2025-05-23 00:49:08.992837 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-23 00:49:08.992847 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.357) 0:01:10.131 ************ 2025-05-23 00:49:08.992863 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992875 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992885 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992894 | orchestrator | 2025-05-23 00:49:08.992904 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-23 00:49:08.992913 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.240) 0:01:10.371 ************ 2025-05-23 00:49:08.992922 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.992932 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.992941 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.992951 | orchestrator | 2025-05-23 00:49:08.992960 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-23 00:49:08.992970 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.381) 0:01:10.753 ************ 2025-05-23 00:49:08.992979 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:49:08.992989 | orchestrator | 2025-05-23 00:49:08.992998 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-23 00:49:08.993008 | orchestrator | Friday 23 May 2025 00:48:00 +0000 (0:00:00.647) 0:01:11.400 ************ 2025-05-23 00:49:08.993017 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.993027 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.993036 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.993045 | orchestrator | 2025-05-23 00:49:08.993055 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-23 00:49:08.993064 | orchestrator | Friday 23 May 2025 00:48:00 +0000 (0:00:00.357) 0:01:11.757 ************ 2025-05-23 00:49:08.993074 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.993083 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.993092 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.993107 | orchestrator | 2025-05-23 00:49:08.993117 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-23 00:49:08.993126 | orchestrator | Friday 23 May 2025 00:48:01 +0000 (0:00:00.520) 0:01:12.278 ************ 2025-05-23 00:49:08.993136 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993145 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993155 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993164 | orchestrator | 2025-05-23 00:49:08.993174 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-23 00:49:08.993183 | orchestrator | Friday 23 May 2025 00:48:01 +0000 (0:00:00.552) 0:01:12.831 ************ 
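The NB and SB cluster-status checks above are skipped on all three hosts because no pre-existing OVN database volumes were found, so the role proceeds to bootstrap a fresh Raft cluster. For manually inspecting an already-running cluster, roughly equivalent checks can be run from a control node. This is only an illustrative sketch, assuming a Docker-based deployment, the ovn_nb_db / ovn_sb_db container names defined further below, and the default OVN control-socket paths, all of which may differ per image:

  # Raft cluster status of the OVN northbound database (socket path is an assumption)
  docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  # Raft cluster status of the OVN southbound database (socket path is an assumption)
  docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

Both commands report the cluster ID, the local server's role (leader or follower), and the known members, which is the same kind of information the "Get OVN_Northbound/Southbound cluster leader" tasks later in this play act on.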
2025-05-23 00:49:08.993193 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993202 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993211 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993221 | orchestrator | 2025-05-23 00:49:08.993230 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-23 00:49:08.993239 | orchestrator | Friday 23 May 2025 00:48:02 +0000 (0:00:00.920) 0:01:13.752 ************ 2025-05-23 00:49:08.993249 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993258 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993268 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993277 | orchestrator | 2025-05-23 00:49:08.993287 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-23 00:49:08.993296 | orchestrator | Friday 23 May 2025 00:48:03 +0000 (0:00:00.747) 0:01:14.499 ************ 2025-05-23 00:49:08.993306 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993315 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993324 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993334 | orchestrator | 2025-05-23 00:49:08.993343 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-23 00:49:08.993353 | orchestrator | Friday 23 May 2025 00:48:04 +0000 (0:00:00.607) 0:01:15.107 ************ 2025-05-23 00:49:08.993362 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993371 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993381 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993390 | orchestrator | 2025-05-23 00:49:08.993400 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-23 00:49:08.993409 | orchestrator | Friday 23 May 2025 00:48:04 +0000 (0:00:00.497) 0:01:15.604 ************ 2025-05-23 00:49:08.993418 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.993428 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.993437 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.993446 | orchestrator | 2025-05-23 00:49:08.993456 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-23 00:49:08.993466 | orchestrator | Friday 23 May 2025 00:48:05 +0000 (0:00:00.361) 0:01:15.965 ************ 2025-05-23 00:49:08.993476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993584 | orchestrator | 2025-05-23 00:49:08.993593 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-23 00:49:08.993603 | orchestrator | Friday 23 May 2025 00:48:06 +0000 (0:00:01.314) 0:01:17.280 ************ 2025-05-23 00:49:08.993613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993734 | orchestrator | 2025-05-23 00:49:08.993743 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-23 00:49:08.993753 | orchestrator | Friday 23 May 2025 00:48:10 +0000 (0:00:03.851) 0:01:21.131 ************ 2025-05-23 00:49:08.993763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 
00:49:08.993906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.993925 | orchestrator | 2025-05-23 00:49:08.993935 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.993944 | orchestrator | Friday 23 May 2025 00:48:12 +0000 (0:00:02.382) 0:01:23.514 ************ 2025-05-23 00:49:08.993954 | orchestrator | 2025-05-23 00:49:08.993963 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.993973 | orchestrator | Friday 23 May 2025 00:48:12 +0000 (0:00:00.106) 0:01:23.621 ************ 2025-05-23 00:49:08.993982 | orchestrator | 2025-05-23 00:49:08.993992 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.994001 | orchestrator | Friday 23 May 2025 00:48:12 +0000 (0:00:00.103) 0:01:23.724 ************ 2025-05-23 00:49:08.994010 | orchestrator | 2025-05-23 00:49:08.994053 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-23 00:49:08.994063 | orchestrator | Friday 23 May 2025 00:48:12 +0000 (0:00:00.101) 0:01:23.826 ************ 2025-05-23 00:49:08.994073 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.994090 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.994099 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.994109 | orchestrator | 2025-05-23 00:49:08.994118 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-23 00:49:08.994128 | orchestrator | Friday 23 May 2025 00:48:16 +0000 (0:00:03.276) 0:01:27.102 ************ 2025-05-23 00:49:08.994137 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.994146 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.994156 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.994165 | orchestrator | 2025-05-23 00:49:08.994175 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-23 00:49:08.994184 | orchestrator | Friday 23 May 2025 00:48:19 +0000 (0:00:03.065) 0:01:30.168 ************ 2025-05-23 00:49:08.994194 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.994203 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.994213 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.994222 | orchestrator | 2025-05-23 00:49:08.994231 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-23 00:49:08.994241 | orchestrator | Friday 23 May 2025 00:48:27 +0000 (0:00:08.270) 0:01:38.438 ************ 2025-05-23 
00:49:08.994250 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.994260 | orchestrator | 2025-05-23 00:49:08.994269 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-23 00:49:08.994278 | orchestrator | Friday 23 May 2025 00:48:27 +0000 (0:00:00.133) 0:01:38.572 ************ 2025-05-23 00:49:08.994292 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.994302 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.994311 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.994320 | orchestrator | 2025-05-23 00:49:08.994338 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-23 00:49:08.994348 | orchestrator | Friday 23 May 2025 00:48:28 +0000 (0:00:00.958) 0:01:39.530 ************ 2025-05-23 00:49:08.994358 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.994367 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.994376 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.994386 | orchestrator | 2025-05-23 00:49:08.994395 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-23 00:49:08.994405 | orchestrator | Friday 23 May 2025 00:48:29 +0000 (0:00:00.630) 0:01:40.161 ************ 2025-05-23 00:49:08.994414 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.994424 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.994433 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.994443 | orchestrator | 2025-05-23 00:49:08.994452 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-23 00:49:08.994462 | orchestrator | Friday 23 May 2025 00:48:30 +0000 (0:00:00.985) 0:01:41.147 ************ 2025-05-23 00:49:08.994471 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.994481 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.994491 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.994500 | orchestrator | 2025-05-23 00:49:08.994510 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-23 00:49:08.994519 | orchestrator | Friday 23 May 2025 00:48:30 +0000 (0:00:00.590) 0:01:41.738 ************ 2025-05-23 00:49:08.994529 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.994538 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.994548 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.994557 | orchestrator | 2025-05-23 00:49:08.994567 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-23 00:49:08.994576 | orchestrator | Friday 23 May 2025 00:48:32 +0000 (0:00:01.192) 0:01:42.930 ************ 2025-05-23 00:49:08.994586 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.994595 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.994605 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.994614 | orchestrator | 2025-05-23 00:49:08.994623 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-23 00:49:08.994638 | orchestrator | Friday 23 May 2025 00:48:32 +0000 (0:00:00.720) 0:01:43.651 ************ 2025-05-23 00:49:08.994647 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.994657 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.994666 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.994675 | orchestrator | 2025-05-23 00:49:08.994685 | orchestrator | TASK 
[ovn-db : Ensuring config directories exist] ****************************** 2025-05-23 00:49:08.994755 | orchestrator | Friday 23 May 2025 00:48:33 +0000 (0:00:00.410) 0:01:44.061 ************ 2025-05-23 00:49:08.994770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994780 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994790 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994845 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994870 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994916 | orchestrator | 2025-05-23 00:49:08.994931 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-23 00:49:08.994947 | orchestrator | Friday 23 May 2025 00:48:34 +0000 (0:00:01.569) 0:01:45.631 ************ 2025-05-23 00:49:08.994963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994979 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.994996 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995054 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995091 | orchestrator | 2025-05-23 00:49:08.995101 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-23 00:49:08.995110 | orchestrator | Friday 23 May 2025 00:48:38 +0000 (0:00:03.958) 0:01:49.590 ************ 2025-05-23 00:49:08.995120 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995130 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995160 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995170 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995180 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995198 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995214 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 00:49:08.995224 | orchestrator | 2025-05-23 00:49:08.995234 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.995242 | orchestrator | Friday 23 May 2025 00:48:41 +0000 (0:00:02.985) 0:01:52.575 ************ 2025-05-23 00:49:08.995250 | orchestrator | 2025-05-23 00:49:08.995258 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.995266 | orchestrator | Friday 23 May 2025 00:48:41 +0000 (0:00:00.061) 0:01:52.636 ************ 2025-05-23 00:49:08.995273 | orchestrator | 2025-05-23 00:49:08.995281 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-23 00:49:08.995289 | orchestrator | Friday 23 May 2025 00:48:41 +0000 (0:00:00.185) 0:01:52.822 ************ 2025-05-23 00:49:08.995297 | orchestrator | 2025-05-23 00:49:08.995304 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-23 00:49:08.995312 | orchestrator | Friday 23 May 2025 00:48:41 +0000 (0:00:00.055) 0:01:52.877 ************ 2025-05-23 00:49:08.995320 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.995327 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.995335 | orchestrator | 2025-05-23 00:49:08.995343 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-23 00:49:08.995351 | orchestrator | Friday 23 May 2025 00:48:48 +0000 (0:00:06.254) 0:01:59.132 
************ 2025-05-23 00:49:08.995358 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.995366 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.995374 | orchestrator | 2025-05-23 00:49:08.995382 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-23 00:49:08.995390 | orchestrator | Friday 23 May 2025 00:48:54 +0000 (0:00:06.506) 0:02:05.638 ************ 2025-05-23 00:49:08.995397 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:49:08.995405 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:49:08.995413 | orchestrator | 2025-05-23 00:49:08.995421 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-23 00:49:08.995428 | orchestrator | Friday 23 May 2025 00:49:01 +0000 (0:00:06.275) 0:02:11.914 ************ 2025-05-23 00:49:08.995436 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:49:08.995444 | orchestrator | 2025-05-23 00:49:08.995452 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-23 00:49:08.995459 | orchestrator | Friday 23 May 2025 00:49:01 +0000 (0:00:00.341) 0:02:12.256 ************ 2025-05-23 00:49:08.995467 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.995475 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.995483 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.995490 | orchestrator | 2025-05-23 00:49:08.995498 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-23 00:49:08.995506 | orchestrator | Friday 23 May 2025 00:49:02 +0000 (0:00:00.823) 0:02:13.079 ************ 2025-05-23 00:49:08.995514 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.995521 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.995529 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.995537 | orchestrator | 2025-05-23 00:49:08.995545 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-23 00:49:08.995552 | orchestrator | Friday 23 May 2025 00:49:02 +0000 (0:00:00.679) 0:02:13.759 ************ 2025-05-23 00:49:08.995560 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.995568 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.995575 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.995583 | orchestrator | 2025-05-23 00:49:08.995591 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-23 00:49:08.995604 | orchestrator | Friday 23 May 2025 00:49:04 +0000 (0:00:01.248) 0:02:15.008 ************ 2025-05-23 00:49:08.995611 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:49:08.995619 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:49:08.995627 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:49:08.995635 | orchestrator | 2025-05-23 00:49:08.995642 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-23 00:49:08.995650 | orchestrator | Friday 23 May 2025 00:49:05 +0000 (0:00:01.080) 0:02:16.089 ************ 2025-05-23 00:49:08.995658 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.995665 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.995673 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.995681 | orchestrator | 2025-05-23 00:49:08.995706 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-23 
00:49:08.995715 | orchestrator | Friday 23 May 2025 00:49:06 +0000 (0:00:00.860) 0:02:16.949 ************ 2025-05-23 00:49:08.995723 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:49:08.995730 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:49:08.995738 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:49:08.995746 | orchestrator | 2025-05-23 00:49:08.995753 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:49:08.995761 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-23 00:49:08.995773 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-23 00:49:08.995785 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-23 00:49:08.995793 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:49:08.995801 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:49:08.995809 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 00:49:08.995817 | orchestrator | 2025-05-23 00:49:08.995825 | orchestrator | 2025-05-23 00:49:08.995833 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:49:08.995841 | orchestrator | Friday 23 May 2025 00:49:07 +0000 (0:00:00.941) 0:02:17.890 ************ 2025-05-23 00:49:08.995849 | orchestrator | =============================================================================== 2025-05-23 00:49:08.995857 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.32s 2025-05-23 00:49:08.995865 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.88s 2025-05-23 00:49:08.995873 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.55s 2025-05-23 00:49:08.995880 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.57s 2025-05-23 00:49:08.995888 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.53s 2025-05-23 00:49:08.995896 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.96s 2025-05-23 00:49:08.995904 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s 2025-05-23 00:49:08.995912 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.03s 2025-05-23 00:49:08.995920 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.99s 2025-05-23 00:49:08.995927 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.87s 2025-05-23 00:49:08.995935 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.38s 2025-05-23 00:49:08.995948 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.04s 2025-05-23 00:49:08.995956 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.86s 2025-05-23 00:49:08.995963 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.68s 2025-05-23 00:49:08.995971 | orchestrator | ovn-db : Ensuring config directories exist 
------------------------------ 1.57s 2025-05-23 00:49:08.995979 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s 2025-05-23 00:49:08.995987 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.44s 2025-05-23 00:49:08.995995 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.37s 2025-05-23 00:49:08.996002 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.33s 2025-05-23 00:49:08.996010 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.31s 2025-05-23 00:49:08.996018 | orchestrator | 2025-05-23 00:49:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:12.026937 | orchestrator | 2025-05-23 00:49:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:12.028437 | orchestrator | 2025-05-23 00:49:12 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:12.030829 | orchestrator | 2025-05-23 00:49:12 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:12.031640 | orchestrator | 2025-05-23 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:15.087615 | orchestrator | 2025-05-23 00:49:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:15.089861 | orchestrator | 2025-05-23 00:49:15 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:15.091834 | orchestrator | 2025-05-23 00:49:15 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:15.091912 | orchestrator | 2025-05-23 00:49:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:18.137580 | orchestrator | 2025-05-23 00:49:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:18.141609 | orchestrator | 2025-05-23 00:49:18 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:18.142764 | orchestrator | 2025-05-23 00:49:18 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:18.142791 | orchestrator | 2025-05-23 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:21.183259 | orchestrator | 2025-05-23 00:49:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:21.183926 | orchestrator | 2025-05-23 00:49:21 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:21.185228 | orchestrator | 2025-05-23 00:49:21 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:21.185248 | orchestrator | 2025-05-23 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:24.221555 | orchestrator | 2025-05-23 00:49:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:24.222476 | orchestrator | 2025-05-23 00:49:24 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:24.223098 | orchestrator | 2025-05-23 00:49:24 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:24.223119 | orchestrator | 2025-05-23 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:27.247722 | orchestrator | 2025-05-23 00:49:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:27.247806 | orchestrator | 
2025-05-23 00:49:27 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:27.249378 | orchestrator | 2025-05-23 00:49:27 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:27.249401 | orchestrator | 2025-05-23 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:30.293107 | orchestrator | 2025-05-23 00:49:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:30.293383 | orchestrator | 2025-05-23 00:49:30 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:30.294169 | orchestrator | 2025-05-23 00:49:30 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:30.294204 | orchestrator | 2025-05-23 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:33.349161 | orchestrator | 2025-05-23 00:49:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:33.350445 | orchestrator | 2025-05-23 00:49:33 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:33.351989 | orchestrator | 2025-05-23 00:49:33 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:33.352174 | orchestrator | 2025-05-23 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:36.406224 | orchestrator | 2025-05-23 00:49:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:36.407137 | orchestrator | 2025-05-23 00:49:36 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:36.409355 | orchestrator | 2025-05-23 00:49:36 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:36.409393 | orchestrator | 2025-05-23 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:39.468181 | orchestrator | 2025-05-23 00:49:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:39.469234 | orchestrator | 2025-05-23 00:49:39 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:39.469266 | orchestrator | 2025-05-23 00:49:39 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:39.469279 | orchestrator | 2025-05-23 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:42.507969 | orchestrator | 2025-05-23 00:49:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:42.508395 | orchestrator | 2025-05-23 00:49:42 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:42.509147 | orchestrator | 2025-05-23 00:49:42 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:42.509173 | orchestrator | 2025-05-23 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:45.553103 | orchestrator | 2025-05-23 00:49:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:45.553357 | orchestrator | 2025-05-23 00:49:45 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:45.555843 | orchestrator | 2025-05-23 00:49:45 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:45.555881 | orchestrator | 2025-05-23 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:48.606546 | orchestrator | 2025-05-23 00:49:48 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:48.607595 | orchestrator | 2025-05-23 00:49:48 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:48.609318 | orchestrator | 2025-05-23 00:49:48 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:48.609598 | orchestrator | 2025-05-23 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:51.655147 | orchestrator | 2025-05-23 00:49:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:51.658725 | orchestrator | 2025-05-23 00:49:51 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:51.660837 | orchestrator | 2025-05-23 00:49:51 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:51.661036 | orchestrator | 2025-05-23 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:54.709350 | orchestrator | 2025-05-23 00:49:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:54.710444 | orchestrator | 2025-05-23 00:49:54 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:54.711405 | orchestrator | 2025-05-23 00:49:54 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:54.711440 | orchestrator | 2025-05-23 00:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:49:57.755729 | orchestrator | 2025-05-23 00:49:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:49:57.760133 | orchestrator | 2025-05-23 00:49:57 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:49:57.760171 | orchestrator | 2025-05-23 00:49:57 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:49:57.760184 | orchestrator | 2025-05-23 00:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:00.813484 | orchestrator | 2025-05-23 00:50:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:00.813577 | orchestrator | 2025-05-23 00:50:00 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:00.814641 | orchestrator | 2025-05-23 00:50:00 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:00.814932 | orchestrator | 2025-05-23 00:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:03.915107 | orchestrator | 2025-05-23 00:50:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:03.916293 | orchestrator | 2025-05-23 00:50:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:03.917336 | orchestrator | 2025-05-23 00:50:03 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:03.917362 | orchestrator | 2025-05-23 00:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:06.985492 | orchestrator | 2025-05-23 00:50:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:06.985986 | orchestrator | 2025-05-23 00:50:06 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:06.987156 | orchestrator | 2025-05-23 00:50:06 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:06.987185 | orchestrator | 2025-05-23 00:50:06 | INFO  | Wait 1 second(s) until the next 
check 2025-05-23 00:50:10.057820 | orchestrator | 2025-05-23 00:50:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:10.058738 | orchestrator | 2025-05-23 00:50:10 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:10.061456 | orchestrator | 2025-05-23 00:50:10 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:10.061961 | orchestrator | 2025-05-23 00:50:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:13.117294 | orchestrator | 2025-05-23 00:50:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:13.118328 | orchestrator | 2025-05-23 00:50:13 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:13.120122 | orchestrator | 2025-05-23 00:50:13 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:13.120146 | orchestrator | 2025-05-23 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:16.171072 | orchestrator | 2025-05-23 00:50:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:16.173102 | orchestrator | 2025-05-23 00:50:16 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:16.173148 | orchestrator | 2025-05-23 00:50:16 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:16.173161 | orchestrator | 2025-05-23 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:19.234840 | orchestrator | 2025-05-23 00:50:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:19.237319 | orchestrator | 2025-05-23 00:50:19 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:19.239459 | orchestrator | 2025-05-23 00:50:19 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:19.239498 | orchestrator | 2025-05-23 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:22.291104 | orchestrator | 2025-05-23 00:50:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:22.291207 | orchestrator | 2025-05-23 00:50:22 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:22.291802 | orchestrator | 2025-05-23 00:50:22 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:22.291829 | orchestrator | 2025-05-23 00:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:25.341102 | orchestrator | 2025-05-23 00:50:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:25.341241 | orchestrator | 2025-05-23 00:50:25 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:25.341337 | orchestrator | 2025-05-23 00:50:25 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:25.341354 | orchestrator | 2025-05-23 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:28.392710 | orchestrator | 2025-05-23 00:50:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:28.392820 | orchestrator | 2025-05-23 00:50:28 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:28.393235 | orchestrator | 2025-05-23 00:50:28 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 
00:50:28.393271 | orchestrator | 2025-05-23 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:31.441624 | orchestrator | 2025-05-23 00:50:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:31.443333 | orchestrator | 2025-05-23 00:50:31 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:31.443381 | orchestrator | 2025-05-23 00:50:31 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:31.443468 | orchestrator | 2025-05-23 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:34.500417 | orchestrator | 2025-05-23 00:50:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:34.500564 | orchestrator | 2025-05-23 00:50:34 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:34.503242 | orchestrator | 2025-05-23 00:50:34 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:34.503300 | orchestrator | 2025-05-23 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:37.545971 | orchestrator | 2025-05-23 00:50:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:37.546500 | orchestrator | 2025-05-23 00:50:37 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:37.547514 | orchestrator | 2025-05-23 00:50:37 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:37.547789 | orchestrator | 2025-05-23 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:40.597248 | orchestrator | 2025-05-23 00:50:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:40.597351 | orchestrator | 2025-05-23 00:50:40 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:40.600007 | orchestrator | 2025-05-23 00:50:40 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:40.600251 | orchestrator | 2025-05-23 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:43.642677 | orchestrator | 2025-05-23 00:50:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:43.643238 | orchestrator | 2025-05-23 00:50:43 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:43.644472 | orchestrator | 2025-05-23 00:50:43 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:43.644492 | orchestrator | 2025-05-23 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:46.694432 | orchestrator | 2025-05-23 00:50:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:46.694805 | orchestrator | 2025-05-23 00:50:46 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:46.695679 | orchestrator | 2025-05-23 00:50:46 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:46.695719 | orchestrator | 2025-05-23 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:49.743772 | orchestrator | 2025-05-23 00:50:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:49.744735 | orchestrator | 2025-05-23 00:50:49 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:49.744879 | orchestrator | 2025-05-23 
00:50:49 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:49.744981 | orchestrator | 2025-05-23 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:52.785310 | orchestrator | 2025-05-23 00:50:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:52.785438 | orchestrator | 2025-05-23 00:50:52 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:52.785456 | orchestrator | 2025-05-23 00:50:52 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:52.785468 | orchestrator | 2025-05-23 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:55.818118 | orchestrator | 2025-05-23 00:50:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:55.818286 | orchestrator | 2025-05-23 00:50:55 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:55.821045 | orchestrator | 2025-05-23 00:50:55 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:55.821080 | orchestrator | 2025-05-23 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:50:58.868950 | orchestrator | 2025-05-23 00:50:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:50:58.869135 | orchestrator | 2025-05-23 00:50:58 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:50:58.870589 | orchestrator | 2025-05-23 00:50:58 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:50:58.870652 | orchestrator | 2025-05-23 00:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:01.924043 | orchestrator | 2025-05-23 00:51:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:01.926703 | orchestrator | 2025-05-23 00:51:01 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:01.928630 | orchestrator | 2025-05-23 00:51:01 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:01.929306 | orchestrator | 2025-05-23 00:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:04.966114 | orchestrator | 2025-05-23 00:51:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:04.966987 | orchestrator | 2025-05-23 00:51:04 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:04.969025 | orchestrator | 2025-05-23 00:51:04 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:04.969046 | orchestrator | 2025-05-23 00:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:08.017700 | orchestrator | 2025-05-23 00:51:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:08.017834 | orchestrator | 2025-05-23 00:51:08 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:08.017850 | orchestrator | 2025-05-23 00:51:08 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:08.017862 | orchestrator | 2025-05-23 00:51:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:11.070366 | orchestrator | 2025-05-23 00:51:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:11.071540 | orchestrator | 2025-05-23 00:51:11 | INFO  | Task 
9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:11.072695 | orchestrator | 2025-05-23 00:51:11 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:11.072898 | orchestrator | 2025-05-23 00:51:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:14.118813 | orchestrator | 2025-05-23 00:51:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:14.120253 | orchestrator | 2025-05-23 00:51:14 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:14.122869 | orchestrator | 2025-05-23 00:51:14 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:14.122911 | orchestrator | 2025-05-23 00:51:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:17.185004 | orchestrator | 2025-05-23 00:51:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:17.186581 | orchestrator | 2025-05-23 00:51:17 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:17.188290 | orchestrator | 2025-05-23 00:51:17 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:17.188682 | orchestrator | 2025-05-23 00:51:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:20.248860 | orchestrator | 2025-05-23 00:51:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:20.250219 | orchestrator | 2025-05-23 00:51:20 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:20.250255 | orchestrator | 2025-05-23 00:51:20 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:20.250268 | orchestrator | 2025-05-23 00:51:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:23.302295 | orchestrator | 2025-05-23 00:51:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:23.303255 | orchestrator | 2025-05-23 00:51:23 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:23.305128 | orchestrator | 2025-05-23 00:51:23 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:23.305237 | orchestrator | 2025-05-23 00:51:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:26.363011 | orchestrator | 2025-05-23 00:51:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:26.363145 | orchestrator | 2025-05-23 00:51:26 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:26.363171 | orchestrator | 2025-05-23 00:51:26 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:26.363268 | orchestrator | 2025-05-23 00:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:29.417317 | orchestrator | 2025-05-23 00:51:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:29.418752 | orchestrator | 2025-05-23 00:51:29 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:29.420396 | orchestrator | 2025-05-23 00:51:29 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:29.420423 | orchestrator | 2025-05-23 00:51:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:32.469939 | orchestrator | 2025-05-23 00:51:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state 
STARTED 2025-05-23 00:51:32.472201 | orchestrator | 2025-05-23 00:51:32 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:32.474056 | orchestrator | 2025-05-23 00:51:32 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:32.474140 | orchestrator | 2025-05-23 00:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:35.518688 | orchestrator | 2025-05-23 00:51:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:35.521247 | orchestrator | 2025-05-23 00:51:35 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:35.526514 | orchestrator | 2025-05-23 00:51:35 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:35.526707 | orchestrator | 2025-05-23 00:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:38.578475 | orchestrator | 2025-05-23 00:51:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:38.579116 | orchestrator | 2025-05-23 00:51:38 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:38.582563 | orchestrator | 2025-05-23 00:51:38 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:38.582681 | orchestrator | 2025-05-23 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:41.632526 | orchestrator | 2025-05-23 00:51:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:41.633885 | orchestrator | 2025-05-23 00:51:41 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:41.637054 | orchestrator | 2025-05-23 00:51:41 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:41.637136 | orchestrator | 2025-05-23 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:44.687732 | orchestrator | 2025-05-23 00:51:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:44.689452 | orchestrator | 2025-05-23 00:51:44 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:44.691249 | orchestrator | 2025-05-23 00:51:44 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:44.691297 | orchestrator | 2025-05-23 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:47.741759 | orchestrator | 2025-05-23 00:51:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:47.743825 | orchestrator | 2025-05-23 00:51:47 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:47.743849 | orchestrator | 2025-05-23 00:51:47 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:47.743857 | orchestrator | 2025-05-23 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:50.808054 | orchestrator | 2025-05-23 00:51:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:50.808163 | orchestrator | 2025-05-23 00:51:50 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:50.808178 | orchestrator | 2025-05-23 00:51:50 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:50.808191 | orchestrator | 2025-05-23 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:53.857760 | orchestrator 
| 2025-05-23 00:51:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:53.859064 | orchestrator | 2025-05-23 00:51:53 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:53.860154 | orchestrator | 2025-05-23 00:51:53 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:53.860180 | orchestrator | 2025-05-23 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:56.913680 | orchestrator | 2025-05-23 00:51:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:56.914416 | orchestrator | 2025-05-23 00:51:56 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:56.915268 | orchestrator | 2025-05-23 00:51:56 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:56.915296 | orchestrator | 2025-05-23 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:51:59.971725 | orchestrator | 2025-05-23 00:51:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:51:59.973665 | orchestrator | 2025-05-23 00:51:59 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:51:59.974824 | orchestrator | 2025-05-23 00:51:59 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:51:59.974856 | orchestrator | 2025-05-23 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:03.018191 | orchestrator | 2025-05-23 00:52:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:03.020082 | orchestrator | 2025-05-23 00:52:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:03.024072 | orchestrator | 2025-05-23 00:52:03 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:03.024228 | orchestrator | 2025-05-23 00:52:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:06.071392 | orchestrator | 2025-05-23 00:52:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:06.071637 | orchestrator | 2025-05-23 00:52:06 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:06.074804 | orchestrator | 2025-05-23 00:52:06 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:06.074839 | orchestrator | 2025-05-23 00:52:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:09.124395 | orchestrator | 2025-05-23 00:52:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:09.125248 | orchestrator | 2025-05-23 00:52:09 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:09.126752 | orchestrator | 2025-05-23 00:52:09 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:09.127690 | orchestrator | 2025-05-23 00:52:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:12.167729 | orchestrator | 2025-05-23 00:52:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:12.169169 | orchestrator | 2025-05-23 00:52:12 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:12.171104 | orchestrator | 2025-05-23 00:52:12 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:12.171132 | orchestrator | 2025-05-23 00:52:12 | 
INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:15.213328 | orchestrator | 2025-05-23 00:52:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:15.214290 | orchestrator | 2025-05-23 00:52:15 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:15.215829 | orchestrator | 2025-05-23 00:52:15 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:15.215882 | orchestrator | 2025-05-23 00:52:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:18.270071 | orchestrator | 2025-05-23 00:52:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:18.271949 | orchestrator | 2025-05-23 00:52:18 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:18.273683 | orchestrator | 2025-05-23 00:52:18 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:18.273718 | orchestrator | 2025-05-23 00:52:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:21.314483 | orchestrator | 2025-05-23 00:52:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:21.315662 | orchestrator | 2025-05-23 00:52:21 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:21.317468 | orchestrator | 2025-05-23 00:52:21 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:21.317604 | orchestrator | 2025-05-23 00:52:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:24.360306 | orchestrator | 2025-05-23 00:52:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:24.361373 | orchestrator | 2025-05-23 00:52:24 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:24.363101 | orchestrator | 2025-05-23 00:52:24 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:24.363160 | orchestrator | 2025-05-23 00:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:27.418293 | orchestrator | 2025-05-23 00:52:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:27.419047 | orchestrator | 2025-05-23 00:52:27 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:27.419460 | orchestrator | 2025-05-23 00:52:27 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:27.419493 | orchestrator | 2025-05-23 00:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:30.466393 | orchestrator | 2025-05-23 00:52:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:30.469403 | orchestrator | 2025-05-23 00:52:30 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:30.471797 | orchestrator | 2025-05-23 00:52:30 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:30.471919 | orchestrator | 2025-05-23 00:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:33.528478 | orchestrator | 2025-05-23 00:52:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:33.531400 | orchestrator | 2025-05-23 00:52:33 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:33.533805 | orchestrator | 2025-05-23 00:52:33 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a 
is in state STARTED 2025-05-23 00:52:33.533853 | orchestrator | 2025-05-23 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:36.580167 | orchestrator | 2025-05-23 00:52:36 | INFO  | Task fd9628d3-8872-4568-9ec9-8e34a798a6cb is in state STARTED 2025-05-23 00:52:36.580277 | orchestrator | 2025-05-23 00:52:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:36.580293 | orchestrator | 2025-05-23 00:52:36 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:36.582660 | orchestrator | 2025-05-23 00:52:36 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:36.582746 | orchestrator | 2025-05-23 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:39.632405 | orchestrator | 2025-05-23 00:52:39 | INFO  | Task fd9628d3-8872-4568-9ec9-8e34a798a6cb is in state STARTED 2025-05-23 00:52:39.632591 | orchestrator | 2025-05-23 00:52:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:39.632688 | orchestrator | 2025-05-23 00:52:39 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:39.635742 | orchestrator | 2025-05-23 00:52:39 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state STARTED 2025-05-23 00:52:39.635766 | orchestrator | 2025-05-23 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:42.678239 | orchestrator | 2025-05-23 00:52:42 | INFO  | Task fd9628d3-8872-4568-9ec9-8e34a798a6cb is in state STARTED 2025-05-23 00:52:42.678458 | orchestrator | 2025-05-23 00:52:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:42.679073 | orchestrator | 2025-05-23 00:52:42 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:42.688502 | orchestrator | 2025-05-23 00:52:42 | INFO  | Task 654697ca-fa9d-4655-b59e-c16850db796a is in state SUCCESS 2025-05-23 00:52:42.689359 | orchestrator | 2025-05-23 00:52:42.689391 | orchestrator | 2025-05-23 00:52:42.689406 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:52:42.689419 | orchestrator | 2025-05-23 00:52:42.689431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:52:42.689444 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.275) 0:00:00.275 ************ 2025-05-23 00:52:42.689455 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.689466 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.689477 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.689488 | orchestrator | 2025-05-23 00:52:42.689498 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:52:42.689509 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.388) 0:00:00.663 ************ 2025-05-23 00:52:42.689520 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-23 00:52:42.689567 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-23 00:52:42.689580 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-23 00:52:42.689591 | orchestrator | 2025-05-23 00:52:42.689601 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-23 00:52:42.689707 | orchestrator | 2025-05-23 00:52:42.690199 | orchestrator | TASK [loadbalancer : 
include_tasks] ******************************************** 2025-05-23 00:52:42.690223 | orchestrator | Friday 23 May 2025 00:45:34 +0000 (0:00:00.325) 0:00:00.988 ************ 2025-05-23 00:52:42.690234 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.690245 | orchestrator | 2025-05-23 00:52:42.690256 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-23 00:52:42.690266 | orchestrator | Friday 23 May 2025 00:45:35 +0000 (0:00:00.840) 0:00:01.829 ************ 2025-05-23 00:52:42.690277 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.690288 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.690299 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.690310 | orchestrator | 2025-05-23 00:52:42.690320 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-23 00:52:42.690331 | orchestrator | Friday 23 May 2025 00:45:36 +0000 (0:00:00.903) 0:00:02.732 ************ 2025-05-23 00:52:42.690342 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.690353 | orchestrator | 2025-05-23 00:52:42.690364 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-23 00:52:42.690375 | orchestrator | Friday 23 May 2025 00:45:37 +0000 (0:00:00.866) 0:00:03.598 ************ 2025-05-23 00:52:42.690385 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.690396 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.690406 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.690417 | orchestrator | 2025-05-23 00:52:42.690428 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-23 00:52:42.690439 | orchestrator | Friday 23 May 2025 00:45:38 +0000 (0:00:01.154) 0:00:04.753 ************ 2025-05-23 00:52:42.690464 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690475 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690486 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690547 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690561 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-23 00:52:42.690572 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-23 00:52:42.690583 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-23 00:52:42.690594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-23 00:52:42.690605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-23 00:52:42.690615 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-23 00:52:42.690626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-23 
00:52:42.690647 | orchestrator | 2025-05-23 00:52:42.690658 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-23 00:52:42.690669 | orchestrator | Friday 23 May 2025 00:45:41 +0000 (0:00:03.172) 0:00:07.925 ************ 2025-05-23 00:52:42.690679 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-23 00:52:42.690690 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-23 00:52:42.690701 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-23 00:52:42.690712 | orchestrator | 2025-05-23 00:52:42.691139 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-23 00:52:42.691157 | orchestrator | Friday 23 May 2025 00:45:42 +0000 (0:00:01.070) 0:00:08.996 ************ 2025-05-23 00:52:42.691168 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-23 00:52:42.691179 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-23 00:52:42.691190 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-23 00:52:42.691202 | orchestrator | 2025-05-23 00:52:42.691213 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-23 00:52:42.691223 | orchestrator | Friday 23 May 2025 00:45:44 +0000 (0:00:01.813) 0:00:10.809 ************ 2025-05-23 00:52:42.691234 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-23 00:52:42.691245 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.691268 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-23 00:52:42.691280 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.691290 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-23 00:52:42.691301 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.691311 | orchestrator | 2025-05-23 00:52:42.691322 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-23 00:52:42.691333 | orchestrator | Friday 23 May 2025 00:45:45 +0000 (0:00:00.732) 0:00:11.541 ************ 2025-05-23 00:52:42.691347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691385 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.691473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.691485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.691503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.691519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.691581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.691603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.691622 | orchestrator | 2025-05-23 00:52:42.691633 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-23 00:52:42.691644 | orchestrator | Friday 23 May 2025 00:45:47 +0000 (0:00:01.977) 0:00:13.519 ************ 2025-05-23 00:52:42.692069 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.692084 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.692095 | orchestrator | changed: [testbed-node-2] 
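Editor's note: the loadbalancer role above iterates over a map of service definitions (haproxy, proxysql, keepalived, haproxy-ssh) and only acts on entries whose 'enabled' flag is true, which is why the haproxy-ssh items are reported as "skipping" while the other three report "changed". The following is a minimal, hedged sketch of that selection pattern; the dictionary values are abridged copies of the items printed in the log, and the enabled_services helper is an illustrative name, not actual kolla-ansible code.

```python
# Sketch only (assumption): mirrors how the role's service map appears in the log
# items above and how tasks such as "Ensuring config directories exist" act only
# on enabled services. Values are abridged from the log output.
services = {
    "haproxy": {
        "container_name": "haproxy",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.4.24.20241206",
    },
    "proxysql": {
        "container_name": "proxysql",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/proxysql:2.6.6.20241206",
    },
    "keepalived": {
        "container_name": "keepalived",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/keepalived:2.2.4.20241206",
    },
    "haproxy-ssh": {
        "container_name": "haproxy_ssh",
        "enabled": False,  # disabled, hence "skipping" in the task output above
        "image": "registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206",
    },
}

def enabled_services(service_map):
    """Yield (name, definition) pairs for services whose 'enabled' flag is true."""
    for name, definition in service_map.items():
        if definition.get("enabled"):
            yield name, definition

# Config-directory style tasks loop over the enabled entries only.
for name, definition in enabled_services(services):
    print(f"would prepare config for {name} (container {definition['container_name']})")
```

Running the sketch lists haproxy, proxysql and keepalived and omits haproxy-ssh, matching the changed/skipping pattern seen in the "Ensuring config directories exist" task above.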
2025-05-23 00:52:42.692105 | orchestrator | 2025-05-23 00:52:42.692126 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-23 00:52:42.692138 | orchestrator | Friday 23 May 2025 00:45:48 +0000 (0:00:01.511) 0:00:15.031 ************ 2025-05-23 00:52:42.692149 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-23 00:52:42.692159 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-23 00:52:42.692170 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-23 00:52:42.692181 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-23 00:52:42.692192 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-23 00:52:42.692202 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-23 00:52:42.692213 | orchestrator | 2025-05-23 00:52:42.692224 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-23 00:52:42.692244 | orchestrator | Friday 23 May 2025 00:45:52 +0000 (0:00:03.519) 0:00:18.550 ************ 2025-05-23 00:52:42.692255 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.692265 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.692276 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.692287 | orchestrator | 2025-05-23 00:52:42.692297 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-23 00:52:42.692309 | orchestrator | Friday 23 May 2025 00:45:54 +0000 (0:00:02.303) 0:00:20.854 ************ 2025-05-23 00:52:42.692319 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.692330 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.692340 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.692351 | orchestrator | 2025-05-23 00:52:42.692362 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-23 00:52:42.692372 | orchestrator | Friday 23 May 2025 00:45:57 +0000 (0:00:02.921) 0:00:23.776 ************ 2025-05-23 00:52:42.692893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.692921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.692933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.692945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.692967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.692989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.693001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.693017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693028 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.693040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.693052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.693063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693081 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.693099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693111 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.693122 | orchestrator | 2025-05-23 00:52:42.693133 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-23 00:52:42.693144 | orchestrator | Friday 23 May 2025 00:45:59 +0000 (0:00:02.018) 0:00:25.794 ************ 2025-05-23 00:52:42.693155 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.693252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.693273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.693285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693836 | orchestrator | 2025-05-23 00:52:42.693849 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-23 00:52:42.693860 | orchestrator | Friday 23 May 2025 00:46:04 +0000 (0:00:05.210) 0:00:31.005 ************ 2025-05-23 00:52:42.693871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.693965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.693976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.693987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.694003 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.694014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.694095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.694119 | orchestrator | 2025-05-23 00:52:42.694130 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-23 00:52:42.694141 | orchestrator | Friday 23 May 2025 00:46:07 +0000 (0:00:03.144) 0:00:34.150 ************ 2025-05-23 00:52:42.694160 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-23 00:52:42.694647 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-23 00:52:42.694668 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-23 00:52:42.694679 | orchestrator | 2025-05-23 00:52:42.694690 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-23 00:52:42.694701 | orchestrator | Friday 23 May 2025 00:46:10 +0000 (0:00:02.576) 0:00:36.726 ************ 2025-05-23 00:52:42.694712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-23 00:52:42.694722 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-23 00:52:42.694734 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-23 00:52:42.694744 | orchestrator | 2025-05-23 00:52:42.694755 | orchestrator | TASK 
[loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-23 00:52:42.694766 | orchestrator | Friday 23 May 2025 00:46:15 +0000 (0:00:04.481) 0:00:41.208 ************ 2025-05-23 00:52:42.694777 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.694788 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.694798 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.694809 | orchestrator | 2025-05-23 00:52:42.694820 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-23 00:52:42.694830 | orchestrator | Friday 23 May 2025 00:46:15 +0000 (0:00:00.821) 0:00:42.030 ************ 2025-05-23 00:52:42.694841 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-23 00:52:42.694853 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-23 00:52:42.694864 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-23 00:52:42.694875 | orchestrator | 2025-05-23 00:52:42.694886 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-23 00:52:42.694897 | orchestrator | Friday 23 May 2025 00:46:18 +0000 (0:00:02.298) 0:00:44.328 ************ 2025-05-23 00:52:42.694907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-23 00:52:42.694926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-23 00:52:42.694938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-23 00:52:42.694957 | orchestrator | 2025-05-23 00:52:42.694968 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-23 00:52:42.694979 | orchestrator | Friday 23 May 2025 00:46:20 +0000 (0:00:02.670) 0:00:46.998 ************ 2025-05-23 00:52:42.694989 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-23 00:52:42.695000 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-23 00:52:42.695011 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-23 00:52:42.695021 | orchestrator | 2025-05-23 00:52:42.695032 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-23 00:52:42.695043 | orchestrator | Friday 23 May 2025 00:46:23 +0000 (0:00:02.999) 0:00:49.997 ************ 2025-05-23 00:52:42.695054 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-23 00:52:42.695064 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-23 00:52:42.695075 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-23 00:52:42.695085 | orchestrator | 2025-05-23 00:52:42.695096 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-23 00:52:42.695107 | orchestrator | Friday 23 May 2025 00:46:25 +0000 (0:00:01.572) 0:00:51.570 ************ 2025-05-23 00:52:42.695117 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.695128 | orchestrator | 
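Editor's note: the per-item "changed"/"skipping" results in the loadbalancer tasks above come from iterating a map of service definitions; each entry carries the container name, image, volumes and an optional healthcheck, and entries with 'enabled': False (here haproxy-ssh) are skipped on every node. Below is a minimal illustrative Python sketch of that pattern, with the service entries reduced to fields copied from the log; deploy_service() is a hypothetical stand-in for the role's copy/template tasks, not kolla-ansible code.

# Illustrative sketch (not the kolla-ansible role itself): a reduced service map
# mirroring the items in the log, and the enabled/skip loop that produces the
# "changed" vs. "skipping" lines per node.
loadbalancer_services = {
    "haproxy": {
        "container_name": "haproxy",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.4.24.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]},
    },
    "proxysql": {
        "container_name": "proxysql",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/proxysql:2.6.6.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"]},
    },
    "keepalived": {
        "container_name": "keepalived",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/keepalived:2.2.4.20241206",
    },
    "haproxy-ssh": {
        "container_name": "haproxy_ssh",
        "enabled": False,  # disabled, hence the "skipping" lines on every node
        "image": "registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206",
    },
}

def deploy_service(name, service):
    # Hypothetical placeholder for the per-item copy/template work done by the role.
    print(f"deploying {service['container_name']} from {service['image']}")

for name, service in loadbalancer_services.items():
    if not service.get("enabled", False):
        print(f"skipping {name}")  # corresponds to the skipped haproxy-ssh items
        continue
    deploy_service(name, service)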
2025-05-23 00:52:42.695138 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-23 00:52:42.695149 | orchestrator | Friday 23 May 2025 00:46:26 +0000 (0:00:00.775) 0:00:52.346 ************ 2025-05-23 00:52:42.695160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.695818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.695838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.695851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.695862 | orchestrator | 2025-05-23 00:52:42.695872 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-23 00:52:42.695883 | orchestrator | Friday 23 May 2025 00:46:29 +0000 (0:00:03.059) 0:00:55.405 ************ 2025-05-23 00:52:42.695895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.695917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.695928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.695939 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.695950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.695962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.695980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.695992 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.696003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.696021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.696037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.696048 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.696059 | orchestrator | 2025-05-23 00:52:42.696070 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-23 00:52:42.696081 | orchestrator | Friday 23 May 2025 00:46:29 +0000 (0:00:00.738) 0:00:56.144 ************ 2025-05-23 00:52:42.696092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.696104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.696120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.696132 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.696143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.696160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.696180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.696191 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.696350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-23 00:52:42.696767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-23 00:52:42.696785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-23 00:52:42.696796 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.696807 | orchestrator | 2025-05-23 00:52:42.696818 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-23 00:52:42.696829 | orchestrator | Friday 23 May 2025 00:46:31 +0000 (0:00:01.616) 0:00:57.761 ************ 2025-05-23 00:52:42.696849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-23 00:52:42.696861 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-23 00:52:42.696872 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-23 00:52:42.696893 | orchestrator | 2025-05-23 00:52:42.696904 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-23 00:52:42.696914 | orchestrator | Friday 23 May 2025 00:46:33 +0000 (0:00:02.099) 0:00:59.860 ************ 2025-05-23 00:52:42.696925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-23 00:52:42.696996 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-23 00:52:42.697011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-23 00:52:42.697021 | orchestrator | 2025-05-23 00:52:42.697032 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-23 00:52:42.697043 | orchestrator | Friday 23 May 2025 00:46:35 +0000 (0:00:01.970) 0:01:01.830 ************ 2025-05-23 00:52:42.697053 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 00:52:42.697074 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 00:52:42.697086 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 00:52:42.697096 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 00:52:42.697107 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.697118 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 00:52:42.697128 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.697139 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 00:52:42.697430 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.697651 | orchestrator | 2025-05-23 00:52:42.697699 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-23 00:52:42.697741 | orchestrator | Friday 23 May 2025 00:46:37 +0000 (0:00:02.139) 0:01:03.970 ************ 2025-05-23 00:52:42.697761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-23 00:52:42.697866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.697878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.698328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.702106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.702139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-23 00:52:42.702148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6', '__omit_place_holder__a20bf05684f3468f07736ee22e027955f06378f6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-23 00:52:42.702156 | orchestrator | 2025-05-23 00:52:42.702164 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-23 00:52:42.702173 | orchestrator | Friday 23 May 2025 00:46:41 +0000 (0:00:03.612) 0:01:07.582 ************ 2025-05-23 00:52:42.702181 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.702188 | orchestrator | 2025-05-23 00:52:42.702196 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-23 00:52:42.702204 | orchestrator | Friday 23 May 2025 00:46:42 +0000 (0:00:00.767) 0:01:08.350 ************ 2025-05-23 00:52:42.702220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-23 00:52:42.702229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.702250 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-23 00:52:42.702284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.702292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-23 00:52:42.702325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.702334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702350 | orchestrator | 2025-05-23 00:52:42.702358 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-23 00:52:42.702366 | orchestrator | Friday 23 May 2025 00:46:46 +0000 (0:00:04.228) 0:01:12.579 ************ 2025-05-23 00:52:42.702394 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-23 00:52:42.702403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.702416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-23 00:52:42.702438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.702455 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.702466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702487 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.702495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-23 00:52:42.702509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2025-05-23 00:52:42.702517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.702559 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.702567 | orchestrator | 2025-05-23 00:52:42.702575 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-23 00:52:42.702582 | orchestrator | Friday 23 May 2025 00:46:47 +0000 (0:00:01.130) 0:01:13.710 ************ 2025-05-23 00:52:42.702591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702608 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.702620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702652 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.702667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-23 00:52:42.702696 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.702710 | orchestrator | 2025-05-23 00:52:42.702724 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-23 00:52:42.702737 | orchestrator | Friday 23 May 2025 00:46:48 +0000 (0:00:01.262) 0:01:14.972 ************ 2025-05-23 00:52:42.702749 | orchestrator | changed: 
[testbed-node-0] 2025-05-23 00:52:42.702763 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.702776 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.702788 | orchestrator | 2025-05-23 00:52:42.702801 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-23 00:52:42.702814 | orchestrator | Friday 23 May 2025 00:46:50 +0000 (0:00:01.297) 0:01:16.270 ************ 2025-05-23 00:52:42.702828 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.702841 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.702855 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.702869 | orchestrator | 2025-05-23 00:52:42.702883 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-23 00:52:42.702897 | orchestrator | Friday 23 May 2025 00:46:52 +0000 (0:00:02.407) 0:01:18.677 ************ 2025-05-23 00:52:42.702911 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.702925 | orchestrator | 2025-05-23 00:52:42.702939 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-23 00:52:42.702951 | orchestrator | Friday 23 May 2025 00:46:53 +0000 (0:00:01.063) 0:01:19.740 ************ 2025-05-23 00:52:42.702978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.702995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.703058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.703104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703124 | orchestrator | 2025-05-23 00:52:42.703132 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-23 00:52:42.703140 | orchestrator | Friday 23 May 2025 00:46:58 +0000 (0:00:04.915) 0:01:24.656 ************ 2025-05-23 00:52:42.703149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.703163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 
00:52:42.703172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703184 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.703205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703221 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.703243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703264 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703272 | orchestrator | 2025-05-23 00:52:42.703280 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-23 00:52:42.703288 | orchestrator | Friday 23 May 2025 00:46:59 +0000 (0:00:00.792) 0:01:25.448 ************ 2025-05-23 00:52:42.703296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-23 00:52:42.703307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-23 00:52:42.703316 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-23 00:52:42.703332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  
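The loop items printed by the haproxy-config tasks in this part of the log all share one shape: a service entry (aodh-api, barbican-api, cinder-api, ...) whose optional 'haproxy' mapping lists the frontends to render, each carrying 'enabled', 'mode', 'external', 'port' and 'listen_port' (plus 'external_fqdn' or 'tls_backend' where present). The snippet below is a minimal illustrative sketch, not the kolla-ansible role's own template logic, that walks such an item exactly as it is printed here and reports which internal and external frontends it would declare; the example data is copied (abridged) from the barbican-api item shown above.

# Illustrative sketch only: inspect a service entry as printed in the
# haproxy-config loop output above and list its HAProxy frontends.
# The dict layout mirrors this log; it is not the role's actual implementation.

def list_frontends(service):
    """Yield (name, 'external' or 'internal', listen_port) for enabled frontends."""
    for name, fe in service.get("haproxy", {}).items():
        # In this log the per-frontend flag is the string 'yes'/'no',
        # while the service-level 'enabled' flag is a real boolean.
        if fe.get("enabled") not in ("yes", True):
            continue
        scope = "external" if fe.get("external") else "internal"
        yield name, scope, fe.get("listen_port")

# Example data copied from the barbican-api item above (image, volumes,
# healthcheck and other keys omitted for brevity).
barbican_api = {
    "enabled": True,
    "group": "barbican-api",
    "haproxy": {
        "barbican_api": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9311", "listen_port": "9311", "tls_backend": "no",
        },
        "barbican_api_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9311", "listen_port": "9311", "tls_backend": "no",
        },
    },
}

if __name__ == "__main__":
    for name, scope, port in list_frontends(barbican_api):
        print(f"{name}: {scope} frontend on port {port}")
    # barbican_api: internal frontend on port 9311
    # barbican_api_external: external frontend on port 9311

The "Configuring firewall for <service>" tasks iterate over these same 'haproxy' sub-entries, which is why their skipped items show only the frontend dictionaries (e.g. barbican_api, barbican_api_external) rather than the full service definition.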
2025-05-23 00:52:42.703340 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-23 00:52:42.703356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-23 00:52:42.703364 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703372 | orchestrator | 2025-05-23 00:52:42.703380 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-23 00:52:42.703387 | orchestrator | Friday 23 May 2025 00:47:00 +0000 (0:00:01.032) 0:01:26.481 ************ 2025-05-23 00:52:42.703395 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.703403 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.703411 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.703419 | orchestrator | 2025-05-23 00:52:42.703427 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-23 00:52:42.703434 | orchestrator | Friday 23 May 2025 00:47:01 +0000 (0:00:01.351) 0:01:27.832 ************ 2025-05-23 00:52:42.703442 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.703450 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.703458 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.703466 | orchestrator | 2025-05-23 00:52:42.703473 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-23 00:52:42.703481 | orchestrator | Friday 23 May 2025 00:47:03 +0000 (0:00:01.891) 0:01:29.724 ************ 2025-05-23 00:52:42.703489 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703501 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703509 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703516 | orchestrator | 2025-05-23 00:52:42.703549 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-23 00:52:42.703559 | orchestrator | Friday 23 May 2025 00:47:03 +0000 (0:00:00.246) 0:01:29.970 ************ 2025-05-23 00:52:42.703567 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.703574 | orchestrator | 2025-05-23 00:52:42.703582 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-23 00:52:42.703590 | orchestrator | Friday 23 May 2025 00:47:04 +0000 (0:00:00.789) 0:01:30.759 ************ 2025-05-23 00:52:42.703598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-23 00:52:42.703611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-23 00:52:42.703619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-23 00:52:42.703628 | orchestrator | 2025-05-23 00:52:42.703635 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-23 00:52:42.703643 | orchestrator | Friday 23 May 2025 00:47:09 +0000 (0:00:04.753) 0:01:35.513 ************ 2025-05-23 00:52:42.703651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-23 00:52:42.703665 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-23 00:52:42.703687 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-23 00:52:42.703703 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703711 | orchestrator | 2025-05-23 00:52:42.703719 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-23 00:52:42.703726 | orchestrator | Friday 23 May 2025 00:47:11 +0000 (0:00:01.808) 0:01:37.321 ************ 2025-05-23 00:52:42.703738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703755 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703784 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-23 00:52:42.703813 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703821 | orchestrator | 2025-05-23 00:52:42.703828 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-23 00:52:42.703836 | orchestrator | Friday 23 May 2025 00:47:13 +0000 (0:00:01.956) 0:01:39.278 ************ 2025-05-23 00:52:42.703844 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703852 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703860 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703867 | orchestrator | 2025-05-23 00:52:42.703875 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-23 00:52:42.703883 | orchestrator | Friday 23 May 2025 00:47:13 +0000 (0:00:00.812) 0:01:40.090 ************ 2025-05-23 00:52:42.703891 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.703899 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.703906 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.703914 | orchestrator | 2025-05-23 00:52:42.703922 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-23 00:52:42.703930 | orchestrator | Friday 23 May 2025 00:47:15 +0000 (0:00:01.445) 0:01:41.535 ************ 2025-05-23 00:52:42.703938 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.703945 | orchestrator | 2025-05-23 00:52:42.703953 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-23 00:52:42.703961 | orchestrator | Friday 23 May 2025 00:47:16 +0000 (0:00:01.138) 0:01:42.674 ************ 2025-05-23 00:52:42.703972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.703982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.703998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.704028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.704073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704105 | orchestrator | 2025-05-23 00:52:42.704113 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-23 00:52:42.704121 | orchestrator | Friday 23 May 2025 00:47:20 +0000 (0:00:04.252) 0:01:46.926 ************ 2025-05-23 00:52:42.704129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.704138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.704159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704191 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.704200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704221 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.704230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.704241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.704270 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.704278 | orchestrator | 2025-05-23 00:52:42.704286 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-23 00:52:42.704294 | orchestrator | Friday 23 May 2025 00:47:22 +0000 (0:00:01.351) 0:01:48.278 ************ 2025-05-23 00:52:42.704302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705166 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.705182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705205 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.705216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-23 00:52:42.705238 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.705249 | orchestrator | 2025-05-23 00:52:42.705261 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL 
users config] ************* 2025-05-23 00:52:42.705289 | orchestrator | Friday 23 May 2025 00:47:23 +0000 (0:00:01.172) 0:01:49.450 ************ 2025-05-23 00:52:42.705300 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.705311 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.705322 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.705333 | orchestrator | 2025-05-23 00:52:42.705344 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-23 00:52:42.705355 | orchestrator | Friday 23 May 2025 00:47:24 +0000 (0:00:01.709) 0:01:51.160 ************ 2025-05-23 00:52:42.705365 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.705376 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.705387 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.705398 | orchestrator | 2025-05-23 00:52:42.705408 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-23 00:52:42.705419 | orchestrator | Friday 23 May 2025 00:47:27 +0000 (0:00:02.695) 0:01:53.856 ************ 2025-05-23 00:52:42.705430 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.705441 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.705452 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.705462 | orchestrator | 2025-05-23 00:52:42.705480 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-23 00:52:42.705491 | orchestrator | Friday 23 May 2025 00:47:28 +0000 (0:00:00.443) 0:01:54.300 ************ 2025-05-23 00:52:42.705502 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.705512 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.705523 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.705581 | orchestrator | 2025-05-23 00:52:42.705592 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-23 00:52:42.705603 | orchestrator | Friday 23 May 2025 00:47:28 +0000 (0:00:00.498) 0:01:54.799 ************ 2025-05-23 00:52:42.705614 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.705624 | orchestrator | 2025-05-23 00:52:42.705635 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-23 00:52:42.705646 | orchestrator | Friday 23 May 2025 00:47:30 +0000 (0:00:01.467) 0:01:56.267 ************ 2025-05-23 00:52:42.705659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 00:52:42.705689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.705702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 00:52:42.705791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.705809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 00:52:42.705894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.705906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.705971 | orchestrator | 2025-05-23 00:52:42.705988 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-23 00:52:42.706000 | orchestrator | Friday 23 May 2025 00:47:35 +0000 (0:00:05.366) 0:02:01.634 ************ 2025-05-23 00:52:42.706012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 00:52:42.706078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.706090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706160 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.706172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 00:52:42.706207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.706220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706297 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.706312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 00:52:42.706324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 00:52:42.706335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.706410 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.706421 | orchestrator | 2025-05-23 00:52:42.706432 | orchestrator | TASK 
[haproxy-config : Configuring firewall for designate] ********************* 2025-05-23 00:52:42.706443 | orchestrator | Friday 23 May 2025 00:47:36 +0000 (0:00:00.909) 0:02:02.543 ************ 2025-05-23 00:52:42.706454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706481 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.706492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706513 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.706524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-23 00:52:42.706617 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.706650 | orchestrator | 2025-05-23 00:52:42.706682 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-23 00:52:42.706695 | orchestrator | Friday 23 May 2025 00:47:37 +0000 (0:00:01.393) 0:02:03.937 ************ 2025-05-23 00:52:42.706704 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.706714 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.706723 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.706733 | orchestrator | 2025-05-23 00:52:42.706742 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-23 00:52:42.706752 | orchestrator | Friday 23 May 2025 00:47:38 +0000 (0:00:01.084) 0:02:05.021 ************ 2025-05-23 00:52:42.706761 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.706771 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.706780 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.706790 | orchestrator | 2025-05-23 00:52:42.706799 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-23 00:52:42.706809 | orchestrator | Friday 23 May 2025 00:47:40 +0000 (0:00:01.856) 0:02:06.878 ************ 2025-05-23 00:52:42.706818 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.706828 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.706837 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.706847 | orchestrator | 2025-05-23 00:52:42.706856 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-23 00:52:42.706872 | orchestrator | Friday 23 May 2025 00:47:41 +0000 
(0:00:00.328) 0:02:07.207 ************ 2025-05-23 00:52:42.706882 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.706892 | orchestrator | 2025-05-23 00:52:42.706901 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-23 00:52:42.706911 | orchestrator | Friday 23 May 2025 00:47:41 +0000 (0:00:00.813) 0:02:08.020 ************ 2025-05-23 00:52:42.706923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 00:52:42.706946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.706966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 00:52:42.706982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.707006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 00:52:42.707022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.707038 | orchestrator | 2025-05-23 00:52:42.707048 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-23 00:52:42.707058 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:05.049) 0:02:13.069 ************ 2025-05-23 00:52:42.707075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 00:52:42.707090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 
'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.707107 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.707124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 00:52:42.707140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.707157 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.707168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 00:52:42.707186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.707202 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.707212 | orchestrator | 2025-05-23 00:52:42.707221 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-23 00:52:42.707231 | orchestrator | Friday 23 May 2025 00:47:52 +0000 (0:00:05.167) 0:02:18.237 ************ 2025-05-23 00:52:42.707245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707265 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.707275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707302 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.707312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-23 00:52:42.707332 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.707349 | orchestrator | 2025-05-23 00:52:42.707359 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-23 00:52:42.707369 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:04.637) 0:02:22.875 ************ 2025-05-23 00:52:42.707379 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.707389 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.707398 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.707408 | orchestrator | 2025-05-23 00:52:42.707417 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-23 00:52:42.707427 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:01.108) 0:02:23.984 ************ 2025-05-23 00:52:42.707436 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.707445 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.707455 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.707464 | orchestrator | 2025-05-23 00:52:42.707474 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-23 00:52:42.707483 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:01.929) 0:02:25.914 ************ 2025-05-23 00:52:42.707493 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.707502 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.707512 | orchestrator | skipping: 
[testbed-node-2] 2025-05-23 00:52:42.707521 | orchestrator | 2025-05-23 00:52:42.707550 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-23 00:52:42.707560 | orchestrator | Friday 23 May 2025 00:48:00 +0000 (0:00:00.380) 0:02:26.295 ************ 2025-05-23 00:52:42.707569 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.707579 | orchestrator | 2025-05-23 00:52:42.707588 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-23 00:52:42.707598 | orchestrator | Friday 23 May 2025 00:48:01 +0000 (0:00:01.073) 0:02:27.368 ************ 2025-05-23 00:52:42.707608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 00:52:42.707619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 00:52:42.707635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 00:52:42.707651 | orchestrator | 2025-05-23 00:52:42.707661 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-23 00:52:42.707670 | orchestrator | Friday 23 May 2025 00:48:05 +0000 (0:00:04.727) 0:02:32.096 ************ 2025-05-23 00:52:42.707680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 00:52:42.707690 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.707700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 00:52:42.707711 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.707724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 00:52:42.707734 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.707744 | orchestrator | 2025-05-23 00:52:42.707754 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-23 00:52:42.707763 | orchestrator | Friday 23 May 2025 00:48:06 +0000 (0:00:00.460) 0:02:32.557 ************ 2025-05-23 00:52:42.707773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-23 00:52:42.707782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-23 00:52:42.707792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-23 00:52:42.707802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-23 00:52:42.707812 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.707821 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.707831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2025-05-23 00:52:42.707850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-23 00:52:42.707860 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.707870 | orchestrator | 2025-05-23 00:52:42.707879 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-23 00:52:42.707889 | orchestrator | Friday 23 May 2025 00:48:07 +0000 (0:00:01.014) 0:02:33.571 ************ 2025-05-23 00:52:42.707898 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.707908 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.707917 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.707927 | orchestrator | 2025-05-23 00:52:42.707936 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-23 00:52:42.707946 | orchestrator | Friday 23 May 2025 00:48:08 +0000 (0:00:01.150) 0:02:34.722 ************ 2025-05-23 00:52:42.707955 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.707965 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.707974 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.707984 | orchestrator | 2025-05-23 00:52:42.707993 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-23 00:52:42.708003 | orchestrator | Friday 23 May 2025 00:48:10 +0000 (0:00:02.118) 0:02:36.840 ************ 2025-05-23 00:52:42.708012 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.708021 | orchestrator | 2025-05-23 00:52:42.708031 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-23 00:52:42.708041 | orchestrator | Friday 23 May 2025 00:48:11 +0000 (0:00:00.941) 0:02:37.782 ************ 2025-05-23 00:52:42.708051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': 
{'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.708156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708166 | orchestrator | 2025-05-23 00:52:42.708181 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-23 00:52:42.708191 | orchestrator | Friday 23 May 2025 00:48:19 +0000 (0:00:08.128) 0:02:45.910 ************ 2025-05-23 00:52:42.708201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708212 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708236 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.708246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708287 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.708297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.708326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.708336 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.708346 | orchestrator | 2025-05-23 00:52:42.708356 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-23 00:52:42.708366 | orchestrator | Friday 23 May 2025 00:48:20 +0000 (0:00:01.117) 0:02:47.028 ************ 2025-05-23 00:52:42.708375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708442 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.708451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708471 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.708481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-23 00:52:42.708524 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.708556 | orchestrator | 2025-05-23 00:52:42.708566 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-23 00:52:42.708576 | orchestrator | Friday 23 May 2025 00:48:22 +0000 (0:00:01.760) 0:02:48.788 ************ 2025-05-23 00:52:42.708585 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.708595 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.708604 
| orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.708614 | orchestrator | 2025-05-23 00:52:42.708623 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-23 00:52:42.708633 | orchestrator | Friday 23 May 2025 00:48:23 +0000 (0:00:01.316) 0:02:50.105 ************ 2025-05-23 00:52:42.708642 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.708652 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.708661 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.708670 | orchestrator | 2025-05-23 00:52:42.708680 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-23 00:52:42.708689 | orchestrator | Friday 23 May 2025 00:48:26 +0000 (0:00:02.119) 0:02:52.224 ************ 2025-05-23 00:52:42.708699 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.708708 | orchestrator | 2025-05-23 00:52:42.708718 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-23 00:52:42.708727 | orchestrator | Friday 23 May 2025 00:48:27 +0000 (0:00:01.022) 0:02:53.247 ************ 2025-05-23 00:52:42.708745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:52:42.708765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:52:42.708905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:52:42.708923 | orchestrator | 2025-05-23 00:52:42.708933 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-23 00:52:42.708949 | orchestrator | Friday 23 May 2025 00:48:30 +0000 (0:00:03.887) 0:02:57.134 ************ 2025-05-23 00:52:42.708964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:52:42.708976 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.708994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:52:42.709010 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.709025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:52:42.709036 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.709046 | orchestrator | 2025-05-23 00:52:42.709060 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-23 00:52:42.709070 | orchestrator | Friday 23 May 2025 00:48:32 +0000 (0:00:01.140) 0:02:58.275 ************ 2025-05-23 00:52:42.709080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-23 00:52:42.709101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-23 00:52:42.709128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-23 00:52:42.709138 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.709148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}}) 
 2025-05-23 00:52:42.709172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-23 00:52:42.709191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-23 00:52:42.709201 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.709211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-23 00:52:42.709236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-23 00:52:42.709246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-23 00:52:42.709256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-23 00:52:42.709271 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.709280 | orchestrator | 2025-05-23 00:52:42.709290 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-23 00:52:42.709300 | orchestrator | Friday 23 May 2025 00:48:33 +0000 (0:00:01.284) 0:02:59.559 ************ 2025-05-23 00:52:42.709309 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.709319 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.709328 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.709338 | orchestrator | 2025-05-23 00:52:42.709347 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-23 00:52:42.709357 | orchestrator | Friday 23 May 2025 00:48:34 +0000 (0:00:01.384) 
0:03:00.944 ************ 2025-05-23 00:52:42.709367 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.709376 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.709386 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.709396 | orchestrator | 2025-05-23 00:52:42.709405 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-23 00:52:42.709415 | orchestrator | Friday 23 May 2025 00:48:36 +0000 (0:00:02.231) 0:03:03.176 ************ 2025-05-23 00:52:42.709424 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.709434 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.709443 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.709453 | orchestrator | 2025-05-23 00:52:42.709462 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-23 00:52:42.709472 | orchestrator | Friday 23 May 2025 00:48:37 +0000 (0:00:00.465) 0:03:03.642 ************ 2025-05-23 00:52:42.709481 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.709491 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.709500 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.709511 | orchestrator | 2025-05-23 00:52:42.709522 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-23 00:52:42.709551 | orchestrator | Friday 23 May 2025 00:48:37 +0000 (0:00:00.269) 0:03:03.911 ************ 2025-05-23 00:52:42.709562 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.709572 | orchestrator | 2025-05-23 00:52:42.709587 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-23 00:52:42.709598 | orchestrator | Friday 23 May 2025 00:48:38 +0000 (0:00:01.271) 0:03:05.183 ************ 2025-05-23 00:52:42.709610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:52:42.709623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:52:42.709672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:52:42.709733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709756 | orchestrator | 2025-05-23 00:52:42.709767 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-23 00:52:42.709779 | orchestrator | Friday 23 May 2025 00:48:43 +0000 (0:00:04.055) 0:03:09.238 ************ 2025-05-23 00:52:42.709794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:52:42.709807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709833 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.709851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:52:42.709863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709883 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.709901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:52:42.709912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:52:42.709927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:52:42.709937 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.709947 | orchestrator | 2025-05-23 00:52:42.709957 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-23 00:52:42.709966 | orchestrator | Friday 23 May 2025 00:48:43 +0000 (0:00:00.770) 0:03:10.009 ************ 2025-05-23 00:52:42.709981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.709992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.710002 | orchestrator | skipping: [testbed-node-0] 
2025-05-23 00:52:42.710012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.710082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.710093 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.710103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.710113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-23 00:52:42.710123 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.710132 | orchestrator | 2025-05-23 00:52:42.710142 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-23 00:52:42.710152 | orchestrator | Friday 23 May 2025 00:48:45 +0000 (0:00:01.203) 0:03:11.212 ************ 2025-05-23 00:52:42.710161 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.710171 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.710180 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.710189 | orchestrator | 2025-05-23 00:52:42.710203 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-23 00:52:42.710213 | orchestrator | Friday 23 May 2025 00:48:46 +0000 (0:00:01.336) 0:03:12.548 ************ 2025-05-23 00:52:42.710223 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.710232 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.710241 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.710251 | orchestrator | 2025-05-23 00:52:42.710260 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-23 00:52:42.710277 | orchestrator | Friday 23 May 2025 00:48:48 +0000 (0:00:02.385) 0:03:14.933 ************ 2025-05-23 00:52:42.710286 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.710296 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.710305 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.710315 | orchestrator | 2025-05-23 00:52:42.710324 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-23 00:52:42.710334 | orchestrator | Friday 23 May 2025 00:48:49 +0000 (0:00:00.299) 0:03:15.233 ************ 2025-05-23 00:52:42.710343 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.710352 | orchestrator | 2025-05-23 00:52:42.710362 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-23 00:52:42.710371 | orchestrator | Friday 23 May 2025 00:48:50 +0000 (0:00:01.259) 0:03:16.493 ************ 2025-05-23 00:52:42.710382 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 00:52:42.710404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 00:52:42.710429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 00:52:42.710455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710466 | orchestrator | 2025-05-23 00:52:42.710475 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-23 00:52:42.710485 | orchestrator | Friday 23 May 2025 00:48:54 +0000 (0:00:04.434) 0:03:20.928 ************ 2025-05-23 00:52:42.710501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 00:52:42.710513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710522 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.710663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 00:52:42.710679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710689 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.710706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 00:52:42.710716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.710726 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.710736 | orchestrator | 2025-05-23 00:52:42.710746 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-23 00:52:42.710755 | orchestrator | Friday 23 May 2025 00:48:55 +0000 (0:00:01.020) 0:03:21.948 ************ 2025-05-23 00:52:42.710765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710790 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.710799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710815 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.710826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-23 00:52:42.710843 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.710851 | orchestrator | 2025-05-23 00:52:42.710858 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-23 00:52:42.710866 | orchestrator | Friday 23 May 2025 00:48:57 +0000 (0:00:01.370) 0:03:23.319 ************ 2025-05-23 00:52:42.710874 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.710882 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.710890 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.710898 | orchestrator | 2025-05-23 00:52:42.710906 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-23 00:52:42.710914 | orchestrator | Friday 23 May 2025 00:48:58 +0000 (0:00:01.334) 0:03:24.653 ************ 2025-05-23 00:52:42.710922 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.710929 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.710937 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.710945 | orchestrator | 2025-05-23 00:52:42.710953 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-23 00:52:42.710960 | orchestrator | Friday 23 May 2025 00:49:00 +0000 (0:00:02.126) 0:03:26.779 ************ 2025-05-23 00:52:42.710968 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.710976 | orchestrator | 2025-05-23 
00:52:42.710984 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-23 00:52:42.710991 | orchestrator | Friday 23 May 2025 00:49:01 +0000 (0:00:01.269) 0:03:28.049 ************ 2025-05-23 00:52:42.711003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-23 00:52:42.711013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-23 00:52:42.711051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-23 00:52:42.711128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711153 | orchestrator | 2025-05-23 00:52:42.711161 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-23 00:52:42.711169 | orchestrator | Friday 23 May 2025 00:49:06 +0000 (0:00:04.797) 0:03:32.847 ************ 2025-05-23 00:52:42.711181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-23 00:52:42.711194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711225 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.711233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-23 00:52:42.711242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711275 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.711283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-23 00:52:42.711295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.711320 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.711328 | orchestrator | 2025-05-23 00:52:42.711336 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-23 00:52:42.711348 | orchestrator | Friday 23 May 2025 00:49:07 +0000 (0:00:00.724) 0:03:33.571 ************ 2025-05-23 00:52:42.711356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711377 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.711386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711402 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.711410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-23 00:52:42.711426 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.711434 | orchestrator | 2025-05-23 00:52:42.711442 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-23 00:52:42.711450 | orchestrator | Friday 23 May 2025 00:49:08 +0000 (0:00:00.970) 0:03:34.542 ************ 2025-05-23 00:52:42.711459 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.711467 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.711475 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.711482 | orchestrator | 2025-05-23 00:52:42.711490 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-23 00:52:42.711498 | orchestrator | Friday 23 May 2025 00:49:09 +0000 (0:00:01.273) 0:03:35.815 ************ 2025-05-23 00:52:42.711506 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.711514 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.711522 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.711545 | orchestrator | 2025-05-23 00:52:42.711553 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-23 00:52:42.711561 | orchestrator | Friday 23 May 2025 00:49:11 +0000 (0:00:02.100) 0:03:37.916 ************ 2025-05-23 00:52:42.711568 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-23 00:52:42.711576 | orchestrator | 2025-05-23 00:52:42.711584 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-23 00:52:42.711593 | orchestrator | Friday 23 May 2025 00:49:13 +0000 (0:00:01.362) 0:03:39.278 ************ 2025-05-23 00:52:42.711604 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:52:42.711612 | orchestrator | 2025-05-23 00:52:42.711620 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-23 00:52:42.711628 | orchestrator | Friday 23 May 2025 00:49:16 +0000 (0:00:03.079) 0:03:42.358 ************ 2025-05-23 00:52:42.711637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711665 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.711677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711699 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.711713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711731 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.711739 | orchestrator | 2025-05-23 00:52:42.711747 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-23 00:52:42.711755 | orchestrator | Friday 23 May 2025 00:49:19 +0000 (0:00:03.481) 0:03:45.839 ************ 2025-05-23 00:52:42.711767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711792 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.711801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711826 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.711840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-23 00:52:42.711849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-23 00:52:42.711857 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.711866 | orchestrator | 2025-05-23 00:52:42.711873 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-23 00:52:42.711882 | orchestrator | Friday 23 May 2025 00:49:22 +0000 (0:00:03.107) 0:03:48.947 ************ 2025-05-23 00:52:42.711890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711914 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.711922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711938 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.711950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-23 00:52:42.711968 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.711976 | orchestrator | 2025-05-23 00:52:42.711983 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-23 00:52:42.711991 | orchestrator | Friday 23 May 2025 00:49:25 +0000 (0:00:02.837) 0:03:51.785 ************ 2025-05-23 00:52:42.711999 | orchestrator | changed: [testbed-node-0] 2025-05-23 
00:52:42.712007 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.712015 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.712023 | orchestrator | 2025-05-23 00:52:42.712031 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-23 00:52:42.712039 | orchestrator | Friday 23 May 2025 00:49:27 +0000 (0:00:01.810) 0:03:53.595 ************ 2025-05-23 00:52:42.712046 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712054 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712062 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712070 | orchestrator | 2025-05-23 00:52:42.712082 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-23 00:52:42.712090 | orchestrator | Friday 23 May 2025 00:49:28 +0000 (0:00:01.267) 0:03:54.862 ************ 2025-05-23 00:52:42.712098 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712106 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712114 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712122 | orchestrator | 2025-05-23 00:52:42.712129 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-23 00:52:42.712137 | orchestrator | Friday 23 May 2025 00:49:29 +0000 (0:00:00.377) 0:03:55.240 ************ 2025-05-23 00:52:42.712145 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.712153 | orchestrator | 2025-05-23 00:52:42.712161 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-23 00:52:42.712174 | orchestrator | Friday 23 May 2025 00:49:30 +0000 (0:00:01.066) 0:03:56.307 ************ 2025-05-23 00:52:42.712183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-23 00:52:42.712192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-23 00:52:42.712205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 
'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-23 00:52:42.712214 | orchestrator | 2025-05-23 00:52:42.712223 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-23 00:52:42.712231 | orchestrator | Friday 23 May 2025 00:49:31 +0000 (0:00:01.615) 0:03:57.922 ************ 2025-05-23 00:52:42.712239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-23 00:52:42.712254 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-23 00:52:42.712274 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-23 00:52:42.712291 | orchestrator | skipping: 
[testbed-node-2] 2025-05-23 00:52:42.712299 | orchestrator | 2025-05-23 00:52:42.712307 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-23 00:52:42.712315 | orchestrator | Friday 23 May 2025 00:49:32 +0000 (0:00:00.331) 0:03:58.254 ************ 2025-05-23 00:52:42.712323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-23 00:52:42.712331 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-23 00:52:42.712347 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-23 00:52:42.712364 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712371 | orchestrator | 2025-05-23 00:52:42.712383 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-23 00:52:42.712392 | orchestrator | Friday 23 May 2025 00:49:32 +0000 (0:00:00.700) 0:03:58.954 ************ 2025-05-23 00:52:42.712400 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712408 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712415 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712423 | orchestrator | 2025-05-23 00:52:42.712431 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-23 00:52:42.712444 | orchestrator | Friday 23 May 2025 00:49:33 +0000 (0:00:00.586) 0:03:59.541 ************ 2025-05-23 00:52:42.712451 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712459 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712467 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712475 | orchestrator | 2025-05-23 00:52:42.712483 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-23 00:52:42.712490 | orchestrator | Friday 23 May 2025 00:49:34 +0000 (0:00:01.215) 0:04:00.756 ************ 2025-05-23 00:52:42.712498 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.712506 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.712513 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.712521 | orchestrator | 2025-05-23 00:52:42.712540 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-23 00:52:42.712548 | orchestrator | Friday 23 May 2025 00:49:34 +0000 (0:00:00.300) 0:04:01.057 ************ 2025-05-23 00:52:42.712555 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.712563 | orchestrator | 2025-05-23 00:52:42.712571 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-23 
00:52:42.712579 | orchestrator | Friday 23 May 2025 00:49:36 +0000 (0:00:01.518) 0:04:02.576 ************ 2025-05-23 00:52:42.712590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 00:52:42.712599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.712644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.712666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.712675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.712833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 00:52:42.712853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.712861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.712869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.712936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.712956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.712964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.713039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.713135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.713156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 00:52:42.713186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.713249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.713262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.713387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.713476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.713600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.713631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.713645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713653 | orchestrator | 2025-05-23 00:52:42.713661 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-23 00:52:42.713669 | orchestrator | Friday 23 May 2025 00:49:41 +0000 (0:00:05.129) 0:04:07.705 ************ 2025-05-23 00:52:42.713727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 00:52:42.713739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.713831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.713886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.713952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 00:52:42.713963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.713974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.713992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.714305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.714354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.714368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 00:52:42.714582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714642 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.714663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.714675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.714699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 00:52:42.714840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.714852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.714876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.714955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.714971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.714982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.715007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.715117 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.715137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.715172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.715184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 00:52:42.715196 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715207 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.715286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 00:52:42.715327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 00:52:42.715344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.715355 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.715367 | orchestrator | 2025-05-23 00:52:42.715379 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-23 00:52:42.715390 | orchestrator | Friday 23 May 2025 00:49:43 +0000 (0:00:01.971) 0:04:09.677 ************ 2025-05-23 00:52:42.715401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715424 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.715435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715467 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.715478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-23 00:52:42.715489 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.715500 | orchestrator | 2025-05-23 00:52:42.715511 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-23 00:52:42.715576 | orchestrator | Friday 23 May 2025 00:49:45 +0000 (0:00:01.991) 0:04:11.668 ************ 2025-05-23 00:52:42.715590 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.715601 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.715612 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.715630 | orchestrator | 2025-05-23 00:52:42.715641 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-23 00:52:42.715652 | orchestrator | Friday 23 May 2025 00:49:46 +0000 (0:00:01.480) 0:04:13.148 ************ 2025-05-23 00:52:42.715662 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.715673 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.715684 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.715694 | orchestrator | 2025-05-23 00:52:42.715705 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-05-23 00:52:42.715716 | orchestrator | Friday 23 May 2025 00:49:49 +0000 (0:00:02.325) 0:04:15.474 ************ 2025-05-23 00:52:42.715726 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.715737 | orchestrator | 2025-05-23 00:52:42.715747 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-23 00:52:42.715758 | orchestrator | Friday 23 May 2025 00:49:50 +0000 (0:00:01.510) 0:04:16.984 ************ 2025-05-23 00:52:42.715769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.715787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.715799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.715810 | orchestrator | 2025-05-23 00:52:42.715821 | orchestrator | TASK [haproxy-config : Add configuration for 
placement when using single external frontend] *** 2025-05-23 00:52:42.715832 | orchestrator | Friday 23 May 2025 00:49:54 +0000 (0:00:04.115) 0:04:21.099 ************ 2025-05-23 00:52:42.715879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.715893 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.715904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.715918 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.715935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.715948 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.715960 | orchestrator | 2025-05-23 00:52:42.715972 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-23 00:52:42.715985 | orchestrator | Friday 23 May 2025 00:49:55 +0000 (0:00:00.598) 0:04:21.698 ************ 2025-05-23 00:52:42.715997 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716024 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.716036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716068 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.716080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716136 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.716148 | orchestrator | 2025-05-23 00:52:42.716160 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-23 00:52:42.716172 | orchestrator | Friday 23 May 2025 00:49:56 +0000 (0:00:00.803) 0:04:22.501 ************ 2025-05-23 00:52:42.716185 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.716198 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.716210 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.716222 | orchestrator | 2025-05-23 00:52:42.716234 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-23 00:52:42.716246 | orchestrator | Friday 23 May 2025 00:49:57 +0000 (0:00:01.259) 0:04:23.760 ************ 2025-05-23 00:52:42.716259 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.716271 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.716282 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.716293 | orchestrator | 2025-05-23 00:52:42.716303 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-23 00:52:42.716314 | orchestrator | Friday 23 May 2025 00:49:59 +0000 (0:00:02.412) 0:04:26.173 ************ 2025-05-23 00:52:42.716324 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.716335 | orchestrator | 2025-05-23 00:52:42.716345 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-23 00:52:42.716356 | orchestrator | Friday 23 May 2025 00:50:01 +0000 (0:00:01.435) 0:04:27.609 ************ 2025-05-23 00:52:42.716373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.716386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.716460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.716477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716577 | orchestrator | 2025-05-23 00:52:42.716588 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-23 00:52:42.716599 | orchestrator | Friday 23 May 2025 00:50:06 +0000 (0:00:05.320) 0:04:32.929 ************ 2025-05-23 00:52:42.716611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.716628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716657 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.716697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.716711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716734 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.716750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.716769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.716792 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.716803 | orchestrator | 2025-05-23 00:52:42.716814 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-23 00:52:42.716825 | orchestrator | Friday 23 May 2025 00:50:07 +0000 (0:00:00.846) 0:04:33.776 ************ 2025-05-23 00:52:42.716863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716910 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.716921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.716973 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.716984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.717000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-23 00:52:42.717011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.717022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-23 00:52:42.717033 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.717044 | orchestrator | 2025-05-23 00:52:42.717054 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-23 00:52:42.717065 | orchestrator | Friday 23 May 2025 00:50:08 +0000 (0:00:01.271) 0:04:35.047 ************ 2025-05-23 00:52:42.717076 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.717086 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.717096 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.717107 | orchestrator | 2025-05-23 00:52:42.717117 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-23 00:52:42.717128 | orchestrator | Friday 23 May 2025 00:50:10 +0000 (0:00:01.420) 0:04:36.467 ************ 2025-05-23 00:52:42.717138 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.717148 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.717159 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.717169 | orchestrator | 2025-05-23 00:52:42.717180 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-23 00:52:42.717190 | orchestrator | Friday 23 May 2025 00:50:12 +0000 (0:00:02.391) 0:04:38.859 ************ 2025-05-23 00:52:42.717201 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.717211 | orchestrator | 2025-05-23 00:52:42.717221 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-23 00:52:42.717232 | orchestrator | Friday 23 May 2025 00:50:14 +0000 (0:00:01.395) 0:04:40.254 ************ 2025-05-23 00:52:42.717242 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-23 00:52:42.717253 | orchestrator | 2025-05-23 00:52:42.717264 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy 
config] *** 2025-05-23 00:52:42.717275 | orchestrator | Friday 23 May 2025 00:50:15 +0000 (0:00:01.639) 0:04:41.893 ************ 2025-05-23 00:52:42.717313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-23 00:52:42.717328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-23 00:52:42.717346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-23 00:52:42.717358 | orchestrator | 2025-05-23 00:52:42.717369 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-23 00:52:42.717380 | orchestrator | Friday 23 May 2025 00:50:20 +0000 (0:00:04.870) 0:04:46.763 ************ 2025-05-23 00:52:42.717392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.717407 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.717419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.717431 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.717442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.717453 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.717464 | orchestrator | 2025-05-23 00:52:42.717475 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-23 00:52:42.717486 | orchestrator | Friday 23 May 2025 00:50:22 +0000 (0:00:01.584) 0:04:48.348 ************ 2025-05-23 00:52:42.717497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717520 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.717574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-23 00:52:42.717728 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.717746 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.717764 | orchestrator | 2025-05-23 00:52:42.717783 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-23 00:52:42.717802 | orchestrator | Friday 23 May 2025 00:50:24 +0000 (0:00:02.386) 0:04:50.734 ************ 2025-05-23 00:52:42.717813 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.717824 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.717835 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.717845 | orchestrator | 2025-05-23 00:52:42.717856 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-23 00:52:42.717866 | orchestrator | Friday 23 May 2025 00:50:27 +0000 (0:00:02.994) 0:04:53.729 ************ 2025-05-23 00:52:42.717877 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.717888 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.717898 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.717909 
| orchestrator | 2025-05-23 00:52:42.717919 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-23 00:52:42.717930 | orchestrator | Friday 23 May 2025 00:50:31 +0000 (0:00:03.616) 0:04:57.346 ************ 2025-05-23 00:52:42.717941 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-spicehtml5proxy) 2025-05-23 00:52:42.717952 | orchestrator | 2025-05-23 00:52:42.717963 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-23 00:52:42.717973 | orchestrator | Friday 23 May 2025 00:50:32 +0000 (0:00:01.322) 0:04:58.669 ************ 2025-05-23 00:52:42.717990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.718002 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.718058 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.718089 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718100 | orchestrator | 2025-05-23 00:52:42.718111 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-23 00:52:42.718122 | orchestrator | Friday 23 May 2025 00:50:34 +0000 (0:00:01.737) 0:05:00.406 ************ 2025-05-23 00:52:42.718176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 
00:52:42.718189 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.718212 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-23 00:52:42.718234 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718245 | orchestrator | 2025-05-23 00:52:42.718256 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-23 00:52:42.718267 | orchestrator | Friday 23 May 2025 00:50:36 +0000 (0:00:02.012) 0:05:02.419 ************ 2025-05-23 00:52:42.718277 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718288 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718299 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718310 | orchestrator | 2025-05-23 00:52:42.718320 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-23 00:52:42.718331 | orchestrator | Friday 23 May 2025 00:50:38 +0000 (0:00:01.879) 0:05:04.298 ************ 2025-05-23 00:52:42.718342 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.718353 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.718364 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.718375 | orchestrator | 2025-05-23 00:52:42.718386 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-23 00:52:42.718402 | orchestrator | Friday 23 May 2025 00:50:41 +0000 (0:00:02.922) 0:05:07.221 ************ 2025-05-23 00:52:42.718413 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.718424 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.718434 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.718445 | orchestrator | 2025-05-23 00:52:42.718455 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-23 00:52:42.718466 | orchestrator | Friday 23 May 2025 00:50:44 +0000 (0:00:03.703) 0:05:10.924 ************ 2025-05-23 00:52:42.718477 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-serialproxy) 2025-05-23 00:52:42.718489 | orchestrator | 2025-05-23 00:52:42.718506 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-23 00:52:42.718517 | orchestrator | Friday 23 May 2025 00:50:46 +0000 (0:00:01.391) 0:05:12.316 
************ 2025-05-23 00:52:42.718599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718615 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718638 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718700 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718710 | orchestrator | 2025-05-23 00:52:42.718721 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-23 00:52:42.718732 | orchestrator | Friday 23 May 2025 00:50:47 +0000 (0:00:01.632) 0:05:13.949 ************ 2025-05-23 00:52:42.718744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718755 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718777 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718794 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-23 00:52:42.718813 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718824 | orchestrator | 2025-05-23 00:52:42.718835 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-23 00:52:42.718846 | orchestrator | Friday 23 May 2025 00:50:49 +0000 (0:00:01.887) 0:05:15.836 ************ 2025-05-23 00:52:42.718856 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.718867 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.718878 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.718894 | orchestrator | 2025-05-23 00:52:42.718912 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-23 00:52:42.718930 | orchestrator | Friday 23 May 2025 00:50:51 +0000 (0:00:02.049) 0:05:17.885 ************ 2025-05-23 00:52:42.718949 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.718969 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.718987 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.719001 | orchestrator | 2025-05-23 00:52:42.719011 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-23 00:52:42.719020 | orchestrator | Friday 23 May 2025 00:50:54 +0000 (0:00:03.096) 0:05:20.981 ************ 2025-05-23 00:52:42.719030 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.719039 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.719049 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.719058 | orchestrator | 2025-05-23 00:52:42.719068 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-23 00:52:42.719077 | orchestrator | Friday 23 May 2025 00:50:58 +0000 (0:00:03.286) 0:05:24.268 ************ 2025-05-23 00:52:42.719087 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.719097 | orchestrator | 2025-05-23 00:52:42.719107 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-23 00:52:42.719117 | orchestrator | Friday 23 May 2025 00:50:59 +0000 (0:00:01.851) 0:05:26.120 ************ 2025-05-23 00:52:42.719160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.719173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.719254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.719339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719438 | orchestrator | 2025-05-23 00:52:42.719448 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-23 00:52:42.719458 | orchestrator | Friday 23 May 2025 00:51:04 +0000 (0:00:04.422) 0:05:30.542 ************ 2025-05-23 00:52:42.719472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.719482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.719564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719601 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.719615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719657 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.719716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.719746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-23 00:52:42.719766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-23 00:52:42.719800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-23 00:52:42.719811 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.719821 | orchestrator | 2025-05-23 00:52:42.719830 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-23 00:52:42.719840 | orchestrator | Friday 23 May 2025 00:51:05 +0000 (0:00:00.877) 0:05:31.420 ************ 2025-05-23 00:52:42.719850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.719860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.719870 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.719947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.720004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.720022 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.720033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.720043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-23 00:52:42.720053 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.720062 | orchestrator | 2025-05-23 00:52:42.720072 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-23 00:52:42.720082 | orchestrator | Friday 23 May 2025 00:51:06 +0000 (0:00:01.071) 0:05:32.492 ************ 2025-05-23 00:52:42.720092 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.720101 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.720111 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.720121 | orchestrator | 2025-05-23 00:52:42.720130 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-23 00:52:42.720140 | orchestrator | Friday 23 May 2025 00:51:07 +0000 (0:00:01.439) 0:05:33.931 ************ 2025-05-23 00:52:42.720149 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.720159 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.720168 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.720178 | orchestrator | 2025-05-23 00:52:42.720188 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-23 00:52:42.720197 | orchestrator | Friday 23 May 2025 00:51:10 +0000 (0:00:02.363) 0:05:36.294 ************ 2025-05-23 00:52:42.720207 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.720217 | orchestrator | 2025-05-23 00:52:42.720226 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-23 00:52:42.720236 | orchestrator | Friday 23 May 2025 00:51:11 +0000 (0:00:01.491) 0:05:37.786 ************ 2025-05-23 00:52:42.720250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:52:42.720262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:52:42.720312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:52:42.720347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:52:42.720373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:52:42.720392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:52:42.720419 | orchestrator | 2025-05-23 00:52:42.720436 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-23 00:52:42.720454 | orchestrator | Friday 23 May 2025 00:51:18 +0000 (0:00:06.757) 0:05:44.543 ************ 2025-05-23 00:52:42.720511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:52:42.720525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:52:42.720564 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.720580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:52:42.720591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:52:42.720608 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.720618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:52:42.720660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:52:42.720673 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.720682 | orchestrator | 2025-05-23 00:52:42.720692 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-23 00:52:42.720702 | orchestrator | Friday 23 May 2025 00:51:19 +0000 (0:00:01.080) 0:05:45.624 ************ 2025-05-23 00:52:42.720712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-23 00:52:42.720722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720747 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.720757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-23 00:52:42.720773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-23 00:52:42.720798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720864 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.720882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-23 00:52:42.720900 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.720919 | orchestrator | 2025-05-23 00:52:42.720937 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-23 00:52:42.720951 | orchestrator | Friday 23 May 2025 00:51:20 +0000 (0:00:01.331) 0:05:46.956 ************ 2025-05-23 00:52:42.720961 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.720971 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.720980 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.720990 | orchestrator | 2025-05-23 00:52:42.720999 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-23 00:52:42.721009 | orchestrator | Friday 23 May 2025 00:51:21 +0000 (0:00:00.729) 0:05:47.686 ************ 2025-05-23 00:52:42.721018 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.721028 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.721038 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.721047 | orchestrator | 2025-05-23 00:52:42.721094 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-23 00:52:42.721105 | orchestrator | Friday 23 May 2025 00:51:23 +0000 (0:00:01.891) 0:05:49.577 ************ 2025-05-23 00:52:42.721115 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.721124 | orchestrator | 2025-05-23 00:52:42.721134 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-23 00:52:42.721144 | orchestrator | Friday 23 May 2025 00:51:25 +0000 (0:00:01.906) 0:05:51.483 ************ 2025-05-23 00:52:42.721154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 00:52:42.721165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 00:52:42.721198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 00:52:42.721319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 00:52:42.721414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 00:52:42.721425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.721465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.721477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 
00:52:42.721551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 00:52:42.721622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.721632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721680 | orchestrator | 2025-05-23 00:52:42.721690 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-23 00:52:42.721700 | orchestrator | Friday 23 May 2025 00:51:30 +0000 (0:00:04.886) 0:05:56.369 ************ 2025-05-23 00:52:42.721710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 00:52:42.721729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 00:52:42.721788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.721807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721848 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.721864 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 00:52:42.721875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 00:52:42.721926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.721936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 00:52:42.721952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 00:52:42.721968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.721982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.721992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.722056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 00:52:42.722084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.722101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 00:52:42.722112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722122 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 00:52:42.722173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 
'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 00:52:42.722183 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722192 | orchestrator | 2025-05-23 00:52:42.722202 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-23 00:52:42.722211 | orchestrator | Friday 23 May 2025 00:51:31 +0000 (0:00:01.226) 0:05:57.595 ************ 2025-05-23 00:52:42.722221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-23 00:52:42.722232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-23 00:52:42.722246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722268 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-23 00:52:42.722287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-23 00:52:42.722297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722317 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2025-05-23 00:52:42.722342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-23 00:52:42.722357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-23 00:52:42.722377 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722387 | orchestrator | 2025-05-23 00:52:42.722397 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-23 00:52:42.722406 | orchestrator | Friday 23 May 2025 00:51:33 +0000 (0:00:01.813) 0:05:59.409 ************ 2025-05-23 00:52:42.722416 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722426 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722435 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722445 | orchestrator | 2025-05-23 00:52:42.722454 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-23 00:52:42.722464 | orchestrator | Friday 23 May 2025 00:51:34 +0000 (0:00:00.956) 0:06:00.366 ************ 2025-05-23 00:52:42.722473 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722483 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722493 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722502 | orchestrator | 2025-05-23 00:52:42.722512 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-23 00:52:42.722522 | orchestrator | Friday 23 May 2025 00:51:35 +0000 (0:00:01.679) 0:06:02.045 ************ 2025-05-23 00:52:42.722585 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.722596 | orchestrator | 2025-05-23 00:52:42.722606 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-23 00:52:42.722616 | orchestrator | Friday 23 May 2025 00:51:37 +0000 (0:00:01.555) 0:06:03.600 ************ 2025-05-23 00:52:42.722632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:52:42.722652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:52:42.722690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-23 00:52:42.722710 | orchestrator | 2025-05-23 00:52:42.722727 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-23 00:52:42.722744 | orchestrator | Friday 23 May 2025 00:51:40 +0000 (0:00:02.887) 0:06:06.488 ************ 2025-05-23 00:52:42.722759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}})  2025-05-23 00:52:42.722781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-23 00:52:42.722790 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722798 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-23 00:52:42.722823 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722831 | orchestrator | 2025-05-23 00:52:42.722839 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-23 00:52:42.722847 | orchestrator | Friday 23 May 2025 00:51:40 +0000 (0:00:00.696) 0:06:07.185 ************ 2025-05-23 00:52:42.722855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-23 00:52:42.722863 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-23 00:52:42.722884 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-23 00:52:42.722900 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722908 | orchestrator | 2025-05-23 00:52:42.722916 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] 
*********** 2025-05-23 00:52:42.722923 | orchestrator | Friday 23 May 2025 00:51:41 +0000 (0:00:00.813) 0:06:07.999 ************ 2025-05-23 00:52:42.722932 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722939 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722947 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.722955 | orchestrator | 2025-05-23 00:52:42.722963 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-23 00:52:42.722971 | orchestrator | Friday 23 May 2025 00:51:42 +0000 (0:00:00.716) 0:06:08.716 ************ 2025-05-23 00:52:42.722979 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.722987 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.722995 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723002 | orchestrator | 2025-05-23 00:52:42.723010 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-23 00:52:42.723018 | orchestrator | Friday 23 May 2025 00:51:44 +0000 (0:00:01.780) 0:06:10.497 ************ 2025-05-23 00:52:42.723026 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:52:42.723034 | orchestrator | 2025-05-23 00:52:42.723042 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-23 00:52:42.723049 | orchestrator | Friday 23 May 2025 00:51:46 +0000 (0:00:01.954) 0:06:12.452 ************ 2025-05-23 00:52:42.723061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723088 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-23 00:52:42.723129 | orchestrator | 2025-05-23 00:52:42.723137 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-23 00:52:42.723145 | orchestrator | Friday 23 May 2025 00:51:53 +0000 (0:00:07.621) 0:06:20.073 ************ 2025-05-23 00:52:42.723153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.723166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.723174 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  
2025-05-23 00:52:42.723199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.723207 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.723228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-23 00:52:42.723236 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723244 | orchestrator | 2025-05-23 00:52:42.723252 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-23 00:52:42.723260 | orchestrator | Friday 23 May 2025 00:51:54 +0000 (0:00:00.959) 0:06:21.032 ************ 2025-05-23 00:52:42.723268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}})  2025-05-23 00:52:42.723277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723305 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723352 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-23 00:52:42.723392 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723400 | orchestrator | 2025-05-23 00:52:42.723408 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-23 00:52:42.723416 | orchestrator | Friday 23 May 2025 00:51:56 +0000 (0:00:01.713) 0:06:22.746 ************ 2025-05-23 00:52:42.723424 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.723432 | orchestrator | changed: [testbed-node-1] 2025-05-23 
00:52:42.723439 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.723447 | orchestrator | 2025-05-23 00:52:42.723455 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-23 00:52:42.723463 | orchestrator | Friday 23 May 2025 00:51:57 +0000 (0:00:01.445) 0:06:24.191 ************ 2025-05-23 00:52:42.723471 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.723478 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.723486 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.723494 | orchestrator | 2025-05-23 00:52:42.723506 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-23 00:52:42.723514 | orchestrator | Friday 23 May 2025 00:52:00 +0000 (0:00:02.480) 0:06:26.672 ************ 2025-05-23 00:52:42.723522 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723548 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723557 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723565 | orchestrator | 2025-05-23 00:52:42.723572 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-23 00:52:42.723580 | orchestrator | Friday 23 May 2025 00:52:00 +0000 (0:00:00.317) 0:06:26.990 ************ 2025-05-23 00:52:42.723588 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723596 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723604 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723612 | orchestrator | 2025-05-23 00:52:42.723619 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-23 00:52:42.723627 | orchestrator | Friday 23 May 2025 00:52:01 +0000 (0:00:00.557) 0:06:27.547 ************ 2025-05-23 00:52:42.723635 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723643 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723651 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723658 | orchestrator | 2025-05-23 00:52:42.723666 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-23 00:52:42.723674 | orchestrator | Friday 23 May 2025 00:52:01 +0000 (0:00:00.565) 0:06:28.112 ************ 2025-05-23 00:52:42.723682 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723690 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723698 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723706 | orchestrator | 2025-05-23 00:52:42.723713 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-23 00:52:42.723721 | orchestrator | Friday 23 May 2025 00:52:02 +0000 (0:00:00.303) 0:06:28.415 ************ 2025-05-23 00:52:42.723729 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723737 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723745 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.723752 | orchestrator | 2025-05-23 00:52:42.723760 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-23 00:52:42.723768 | orchestrator | Friday 23 May 2025 00:52:02 +0000 (0:00:00.567) 0:06:28.982 ************ 2025-05-23 00:52:42.723776 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.723784 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.723791 | orchestrator | skipping: [testbed-node-2] 
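The haproxy-config tasks above iterate per-service dictionaries (skyline_apiserver, rabbitmq_management, prometheus_alertmanager, and so on) whose keys — mode, port, listen_port, external, external_fqdn, auth_user/auth_pass, active_passive — drive the HAProxy frontends and backends that get written out, and the "Configuring firewall for ..." tasks loop over the same entries, which is why the same service names reappear in those skipping lists. A minimal Python sketch of that mapping is given below; it is an illustration only, not the Jinja2 template actually used by the kolla-ansible haproxy-config role, and the helper name render_haproxy_service, the backend host list, and the VIP addresses are assumptions made for the example.

    # Illustrative sketch only: shows how a per-service 'haproxy' entry from the
    # log above could translate into an HAProxy "listen" stanza. Not the real
    # kolla-ansible template; helper name, hosts and VIPs are assumed values.
    def render_haproxy_service(name, svc, backends, internal_vip, external_vip):
        enabled = str(svc.get("enabled", False)).lower() in ("true", "yes")
        if not enabled:
            return ""  # disabled entries (e.g. prometheus_server_external) emit nothing
        bind_ip = external_vip if svc.get("external") else internal_vip
        listen_port = svc.get("listen_port", svc["port"])
        lines = [
            f"listen {name}",
            f"    mode {svc.get('mode', 'http')}",
            f"    bind {bind_ip}:{listen_port}",
        ]
        if "auth_user" in svc:
            # services such as prometheus_alertmanager carry basic-auth credentials
            lines.append(f"    # basic auth for user {svc['auth_user']} added here")
        for i, host in enumerate(backends):
            # with active_passive only the first backend stays active, the rest are backups
            backup = " backup" if svc.get("active_passive") and i > 0 else ""
            lines.append(f"    server {host} {host}:{svc['port']} check{backup}")
        return "\n".join(lines)

    # Example using the rabbitmq_management entry from the log (VIPs are placeholders):
    svc = {"enabled": "yes", "mode": "http", "port": "15672", "host_group": "rabbitmq"}
    print(render_haproxy_service("rabbitmq_management", svc,
                                 ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
                                 "192.0.2.10", "192.0.2.20"))

The loadbalancer handlers that follow then restart the backup keepalived/haproxy/proxysql containers so the regenerated configuration takes effect before the master-side handlers are evaluated.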
2025-05-23 00:52:42.723799 | orchestrator | 2025-05-23 00:52:42.723807 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-23 00:52:42.723815 | orchestrator | Friday 23 May 2025 00:52:03 +0000 (0:00:01.001) 0:06:29.984 ************ 2025-05-23 00:52:42.723823 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.723831 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.723838 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.723846 | orchestrator | 2025-05-23 00:52:42.723854 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-23 00:52:42.723866 | orchestrator | Friday 23 May 2025 00:52:04 +0000 (0:00:00.681) 0:06:30.666 ************ 2025-05-23 00:52:42.723874 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.723882 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.723889 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.723897 | orchestrator | 2025-05-23 00:52:42.723905 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-23 00:52:42.723913 | orchestrator | Friday 23 May 2025 00:52:05 +0000 (0:00:00.612) 0:06:31.278 ************ 2025-05-23 00:52:42.723920 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.723928 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.723936 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.723944 | orchestrator | 2025-05-23 00:52:42.723951 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-23 00:52:42.723959 | orchestrator | Friday 23 May 2025 00:52:06 +0000 (0:00:01.248) 0:06:32.527 ************ 2025-05-23 00:52:42.723967 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.723975 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.723983 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.723994 | orchestrator | 2025-05-23 00:52:42.724002 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-23 00:52:42.724010 | orchestrator | Friday 23 May 2025 00:52:07 +0000 (0:00:01.202) 0:06:33.729 ************ 2025-05-23 00:52:42.724018 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.724026 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.724033 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.724041 | orchestrator | 2025-05-23 00:52:42.724049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-23 00:52:42.724057 | orchestrator | Friday 23 May 2025 00:52:08 +0000 (0:00:00.953) 0:06:34.682 ************ 2025-05-23 00:52:42.724065 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.724073 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.724080 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.724088 | orchestrator | 2025-05-23 00:52:42.724096 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-23 00:52:42.724104 | orchestrator | Friday 23 May 2025 00:52:13 +0000 (0:00:05.106) 0:06:39.788 ************ 2025-05-23 00:52:42.724112 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.724120 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.724127 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.724135 | orchestrator | 2025-05-23 00:52:42.724143 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 
2025-05-23 00:52:42.724151 | orchestrator | Friday 23 May 2025 00:52:16 +0000 (0:00:03.131) 0:06:42.920 ************ 2025-05-23 00:52:42.724159 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.724167 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.724175 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.724182 | orchestrator | 2025-05-23 00:52:42.724190 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-23 00:52:42.724198 | orchestrator | Friday 23 May 2025 00:52:22 +0000 (0:00:06.246) 0:06:49.166 ************ 2025-05-23 00:52:42.724206 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.724214 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.724222 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.724229 | orchestrator | 2025-05-23 00:52:42.724237 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-23 00:52:42.724249 | orchestrator | Friday 23 May 2025 00:52:26 +0000 (0:00:03.706) 0:06:52.873 ************ 2025-05-23 00:52:42.724257 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:52:42.724265 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:52:42.724273 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:52:42.724281 | orchestrator | 2025-05-23 00:52:42.724289 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-23 00:52:42.724296 | orchestrator | Friday 23 May 2025 00:52:35 +0000 (0:00:08.571) 0:07:01.444 ************ 2025-05-23 00:52:42.724304 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724312 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724320 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724339 | orchestrator | 2025-05-23 00:52:42.724347 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-23 00:52:42.724355 | orchestrator | Friday 23 May 2025 00:52:35 +0000 (0:00:00.746) 0:07:02.191 ************ 2025-05-23 00:52:42.724363 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724371 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724379 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724386 | orchestrator | 2025-05-23 00:52:42.724394 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-23 00:52:42.724402 | orchestrator | Friday 23 May 2025 00:52:36 +0000 (0:00:00.407) 0:07:02.599 ************ 2025-05-23 00:52:42.724410 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724418 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724425 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724433 | orchestrator | 2025-05-23 00:52:42.724441 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-23 00:52:42.724453 | orchestrator | Friday 23 May 2025 00:52:37 +0000 (0:00:00.786) 0:07:03.385 ************ 2025-05-23 00:52:42.724461 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724469 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724477 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724484 | orchestrator | 2025-05-23 00:52:42.724492 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-23 00:52:42.724500 | orchestrator | Friday 23 May 2025 00:52:38 +0000 
(0:00:00.817) 0:07:04.202 ************ 2025-05-23 00:52:42.724508 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724515 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724523 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724544 | orchestrator | 2025-05-23 00:52:42.724552 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-23 00:52:42.724560 | orchestrator | Friday 23 May 2025 00:52:38 +0000 (0:00:00.671) 0:07:04.874 ************ 2025-05-23 00:52:42.724568 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:52:42.724575 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:52:42.724583 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:52:42.724591 | orchestrator | 2025-05-23 00:52:42.724599 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-23 00:52:42.724610 | orchestrator | Friday 23 May 2025 00:52:39 +0000 (0:00:00.346) 0:07:05.221 ************ 2025-05-23 00:52:42.724619 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.724626 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.724634 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.724642 | orchestrator | 2025-05-23 00:52:42.724650 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-23 00:52:42.724658 | orchestrator | Friday 23 May 2025 00:52:40 +0000 (0:00:01.182) 0:07:06.404 ************ 2025-05-23 00:52:42.724666 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:52:42.724674 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:52:42.724681 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:52:42.724689 | orchestrator | 2025-05-23 00:52:42.724697 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:52:42.724705 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-23 00:52:42.724713 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-23 00:52:42.724721 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-23 00:52:42.724730 | orchestrator | 2025-05-23 00:52:42.724738 | orchestrator | 2025-05-23 00:52:42.724745 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:52:42.724753 | orchestrator | Friday 23 May 2025 00:52:41 +0000 (0:00:01.120) 0:07:07.524 ************ 2025-05-23 00:52:42.724761 | orchestrator | =============================================================================== 2025-05-23 00:52:42.724769 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.57s 2025-05-23 00:52:42.724777 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 8.13s 2025-05-23 00:52:42.724785 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.62s 2025-05-23 00:52:42.724793 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.76s 2025-05-23 00:52:42.724801 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.25s 2025-05-23 00:52:42.724808 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.37s 2025-05-23 00:52:42.724816 | orchestrator | haproxy-config : 
Copying over nova haproxy config ----------------------- 5.32s 2025-05-23 00:52:42.724824 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.21s 2025-05-23 00:52:42.724839 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.17s 2025-05-23 00:52:42.724847 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.13s 2025-05-23 00:52:42.724855 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.11s 2025-05-23 00:52:42.724867 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.05s 2025-05-23 00:52:42.724875 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.92s 2025-05-23 00:52:42.724883 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.89s 2025-05-23 00:52:42.724891 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.87s 2025-05-23 00:52:42.724899 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.80s 2025-05-23 00:52:42.724907 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.75s 2025-05-23 00:52:42.724914 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.73s 2025-05-23 00:52:42.724922 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.64s 2025-05-23 00:52:42.724930 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.48s 2025-05-23 00:52:42.724938 | orchestrator | 2025-05-23 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:45.724299 | orchestrator | 2025-05-23 00:52:45 | INFO  | Task fd9628d3-8872-4568-9ec9-8e34a798a6cb is in state STARTED 2025-05-23 00:52:45.724636 | orchestrator | 2025-05-23 00:52:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:45.725271 | orchestrator | 2025-05-23 00:52:45 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:52:45.735223 | orchestrator | 2025-05-23 00:52:45 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:52:45.735277 | orchestrator | 2025-05-23 00:52:45 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:45.735289 | orchestrator | 2025-05-23 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:48.765093 | orchestrator | 2025-05-23 00:52:48 | INFO  | Task fd9628d3-8872-4568-9ec9-8e34a798a6cb is in state SUCCESS 2025-05-23 00:52:48.765715 | orchestrator | 2025-05-23 00:52:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:48.766369 | orchestrator | 2025-05-23 00:52:48 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:52:48.767931 | orchestrator | 2025-05-23 00:52:48 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:52:48.768780 | orchestrator | 2025-05-23 00:52:48 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:48.768811 | orchestrator | 2025-05-23 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:51.803616 | orchestrator | 2025-05-23 00:52:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:51.806614 | 
orchestrator | 2025-05-23 00:52:51 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:52:51.806963 | orchestrator | 2025-05-23 00:52:51 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:52:51.807846 | orchestrator | 2025-05-23 00:52:51 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:51.807876 | orchestrator | 2025-05-23 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:54.851473 | orchestrator | 2025-05-23 00:52:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:54.851776 | orchestrator | 2025-05-23 00:52:54 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:52:54.853111 | orchestrator | 2025-05-23 00:52:54 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:52:54.853867 | orchestrator | 2025-05-23 00:52:54 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:54.853894 | orchestrator | 2025-05-23 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:52:57.908279 | orchestrator | 2025-05-23 00:52:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:52:57.908771 | orchestrator | 2025-05-23 00:52:57 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:52:57.912409 | orchestrator | 2025-05-23 00:52:57 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:52:57.913776 | orchestrator | 2025-05-23 00:52:57 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:52:57.913810 | orchestrator | 2025-05-23 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:53:00.954928 | orchestrator | 2025-05-23 00:53:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:53:00.955218 | orchestrator | 2025-05-23 00:53:00 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:53:00.956511 | orchestrator | 2025-05-23 00:53:00 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:53:00.959073 | orchestrator | 2025-05-23 00:53:00 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:53:00.959102 | orchestrator | 2025-05-23 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:53:03.990680 | orchestrator | 2025-05-23 00:53:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:53:03.991830 | orchestrator | 2025-05-23 00:53:03 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:53:03.992984 | orchestrator | 2025-05-23 00:53:03 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:53:03.994275 | orchestrator | 2025-05-23 00:53:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:53:03.994305 | orchestrator | 2025-05-23 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:53:07.035589 | orchestrator | 2025-05-23 00:53:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:53:07.038061 | orchestrator | 2025-05-23 00:53:07 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:53:07.039568 | orchestrator | 2025-05-23 00:53:07 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:53:07.039750 | 
orchestrator | 2025-05-23 00:53:07 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED
2025-05-23 00:53:07.040732 | orchestrator | 2025-05-23 00:53:07 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeat every ~3 seconds from 00:53:10 through 00:54:44: tasks eee81a36-e0fa-4360-a4d6-6ece23412765, be80a8b9-2033-43a5-8b6c-fddd93dc4a1b, b6f20f68-1e51-4496-99a8-1767328b7cfd and 9fe59257-9360-4a03-8fb9-115db9770c45 all remain in state STARTED, each round ending with "Wait 1 second(s) until the next check" ...]
2025-05-23 00:54:47.958270 | orchestrator | 2025-05-23 00:54:47 | INFO  | Task
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:54:47.959179 | orchestrator | 2025-05-23 00:54:47 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:54:47.960639 | orchestrator | 2025-05-23 00:54:47 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:54:47.961614 | orchestrator | 2025-05-23 00:54:47 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:54:47.961643 | orchestrator | 2025-05-23 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:54:51.015711 | orchestrator | 2025-05-23 00:54:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:54:51.017972 | orchestrator | 2025-05-23 00:54:51 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:54:51.023169 | orchestrator | 2025-05-23 00:54:51 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:54:51.025351 | orchestrator | 2025-05-23 00:54:51 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:54:51.025368 | orchestrator | 2025-05-23 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:54:54.070600 | orchestrator | 2025-05-23 00:54:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:54:54.077482 | orchestrator | 2025-05-23 00:54:54 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state STARTED 2025-05-23 00:54:54.079589 | orchestrator | 2025-05-23 00:54:54 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:54:54.079618 | orchestrator | 2025-05-23 00:54:54 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:54:54.079630 | orchestrator | 2025-05-23 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:54:57.126130 | orchestrator | 2025-05-23 00:54:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:54:57.127743 | orchestrator | 2025-05-23 00:54:57 | INFO  | Task be80a8b9-2033-43a5-8b6c-fddd93dc4a1b is in state SUCCESS 2025-05-23 00:54:57.129133 | orchestrator | 2025-05-23 00:54:57.129162 | orchestrator | None 2025-05-23 00:54:57.129170 | orchestrator | 2025-05-23 00:54:57.129179 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:54:57.129188 | orchestrator | 2025-05-23 00:54:57.129196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:54:57.129204 | orchestrator | Friday 23 May 2025 00:52:44 +0000 (0:00:00.287) 0:00:00.287 ************ 2025-05-23 00:54:57.129212 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:54:57.129221 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:54:57.129229 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:54:57.129237 | orchestrator | 2025-05-23 00:54:57.129245 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:54:57.129253 | orchestrator | Friday 23 May 2025 00:52:45 +0000 (0:00:00.450) 0:00:00.738 ************ 2025-05-23 00:54:57.129262 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-23 00:54:57.129270 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-23 00:54:57.129278 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-23 00:54:57.129286 | orchestrator | 2025-05-23 00:54:57.129294 
| orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-23 00:54:57.129302 | orchestrator | 2025-05-23 00:54:57.129310 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-23 00:54:57.129317 | orchestrator | Friday 23 May 2025 00:52:45 +0000 (0:00:00.264) 0:00:01.002 ************ 2025-05-23 00:54:57.129325 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:54:57.129333 | orchestrator | 2025-05-23 00:54:57.129341 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-23 00:54:57.129349 | orchestrator | Friday 23 May 2025 00:52:46 +0000 (0:00:00.570) 0:00:01.573 ************ 2025-05-23 00:54:57.129357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:54:57.129364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:54:57.129372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-23 00:54:57.129380 | orchestrator | 2025-05-23 00:54:57.129387 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-23 00:54:57.129395 | orchestrator | Friday 23 May 2025 00:52:46 +0000 (0:00:00.678) 0:00:02.251 ************ 2025-05-23 00:54:57.129406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129639 | orchestrator | 2025-05-23 00:54:57.129648 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-23 00:54:57.129656 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:01.265) 0:00:03.516 ************ 2025-05-23 00:54:57.129663 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:54:57.129671 | orchestrator | 2025-05-23 00:54:57.129679 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-23 00:54:57.129687 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:00.748) 0:00:04.265 ************ 2025-05-23 00:54:57.129702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.129735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.129780 | orchestrator | 2025-05-23 00:54:57.129789 | 
orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-23 00:54:57.129798 | orchestrator | Friday 23 May 2025 00:52:51 +0000 (0:00:02.658) 0:00:06.924 ************ 2025-05-23 00:54:57.129808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.129823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.129834 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:54:57.129848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.129863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.129873 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:54:57.129883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.129898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.129908 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:54:57.129917 | orchestrator | 2025-05-23 00:54:57.129926 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-23 00:54:57.129935 | orchestrator | Friday 23 May 2025 00:52:52 +0000 (0:00:01.480) 0:00:08.404 ************ 2025-05-23 00:54:57.129949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.130003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.130057 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:54:57.130071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.130088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.130097 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:54:57.130111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-23 00:54:57.130124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-23 00:54:57.130133 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:54:57.130141 | orchestrator | 2025-05-23 00:54:57.130149 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-23 00:54:57.130157 | orchestrator | Friday 23 May 2025 00:52:54 +0000 (0:00:01.610) 0:00:10.014 ************ 2025-05-23 00:54:57.130165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.130179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.130187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.130205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130240 | orchestrator | 2025-05-23 00:54:57.130248 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-23 00:54:57.130256 | orchestrator | Friday 23 May 2025 00:52:57 +0000 (0:00:02.872) 0:00:12.887 ************ 2025-05-23 00:54:57.130264 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130272 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:54:57.130280 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:54:57.130288 | orchestrator | 2025-05-23 00:54:57.130295 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-23 00:54:57.130303 | orchestrator | Friday 23 May 2025 00:53:02 +0000 (0:00:04.737) 0:00:17.624 ************ 2025-05-23 00:54:57.130311 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130319 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:54:57.130326 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:54:57.130334 | orchestrator | 2025-05-23 00:54:57.130342 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-23 00:54:57.130350 | orchestrator | Friday 23 May 2025 00:53:03 +0000 (0:00:01.744) 0:00:19.369 ************ 2025-05-23 00:54:57.130366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-05-23 00:54:57.130379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.130392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-23 00:54:57.130401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-23 00:54:57.130478 | orchestrator | 2025-05-23 00:54:57.130488 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-23 00:54:57.130496 | orchestrator | Friday 23 May 2025 00:53:06 +0000 (0:00:02.454) 0:00:21.823 ************ 2025-05-23 00:54:57.130504 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:54:57.130512 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:54:57.130520 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:54:57.130527 | orchestrator | 2025-05-23 00:54:57.130535 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-23 00:54:57.130543 | orchestrator | Friday 23 May 2025 00:53:06 +0000 (0:00:00.461) 0:00:22.285 ************ 2025-05-23 00:54:57.130551 | orchestrator | 2025-05-23 00:54:57.130559 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-23 00:54:57.130567 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.306) 0:00:22.591 ************ 2025-05-23 00:54:57.130575 | orchestrator | 2025-05-23 00:54:57.130582 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-23 00:54:57.130590 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.103) 0:00:22.695 ************ 2025-05-23 00:54:57.130598 | orchestrator | 2025-05-23 00:54:57.130606 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-23 00:54:57.130613 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.089) 0:00:22.784 ************ 2025-05-23 00:54:57.130621 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:54:57.130629 | orchestrator | 2025-05-23 00:54:57.130637 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-23 00:54:57.130644 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.187) 0:00:22.971 ************ 
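The two safety handlers at this point, "Disable shard allocation" and "Perform a flush", are skipped in this run (their results appear just below), which is what you would expect on a first deployment where there is no running cluster to quiesce. As a rough illustration only, not the role's actual implementation, the equivalent REST calls would look roughly like this; the node address is taken from the healthchecks above, and the "primaries" setting is an assumption:

```python
# Illustrative sketch of what a "disable shard allocation" / "flush" handler pair
# typically does before restarting an OpenSearch node on a live cluster.
# The URL and the "primaries" value are assumptions, not taken from this job.
import requests

OPENSEARCH = "http://192.168.16.10:9200"  # internal API address of testbed-node-0


def disable_shard_allocation() -> None:
    """Allow only primary shards to be (re)allocated while nodes restart."""
    resp = requests.put(
        f"{OPENSEARCH}/_cluster/settings",
        json={"persistent": {"cluster.routing.allocation.enable": "primaries"}},
        timeout=30,
    )
    resp.raise_for_status()


def flush_indices() -> None:
    """Commit in-memory segments to disk so recovery after the restart is quick."""
    requests.post(f"{OPENSEARCH}/_flush", timeout=30).raise_for_status()


if __name__ == "__main__":
    disable_shard_allocation()
    flush_indices()
```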
2025-05-23 00:54:57.130652 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:54:57.130660 | orchestrator | 2025-05-23 00:54:57.130668 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-23 00:54:57.130676 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.423) 0:00:23.395 ************ 2025-05-23 00:54:57.130683 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130691 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:54:57.130699 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:54:57.130707 | orchestrator | 2025-05-23 00:54:57.130715 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-23 00:54:57.130723 | orchestrator | Friday 23 May 2025 00:53:43 +0000 (0:00:35.698) 0:00:59.093 ************ 2025-05-23 00:54:57.130731 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130738 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:54:57.130746 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:54:57.130754 | orchestrator | 2025-05-23 00:54:57.130762 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-23 00:54:57.130770 | orchestrator | Friday 23 May 2025 00:54:44 +0000 (0:01:00.786) 0:01:59.880 ************ 2025-05-23 00:54:57.130778 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:54:57.130785 | orchestrator | 2025-05-23 00:54:57.130793 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-23 00:54:57.130801 | orchestrator | Friday 23 May 2025 00:54:45 +0000 (0:00:00.732) 0:02:00.612 ************ 2025-05-23 00:54:57.130809 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:54:57.130817 | orchestrator | 2025-05-23 00:54:57.130824 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-23 00:54:57.130832 | orchestrator | Friday 23 May 2025 00:54:47 +0000 (0:00:02.590) 0:02:03.203 ************ 2025-05-23 00:54:57.130845 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:54:57.130853 | orchestrator | 2025-05-23 00:54:57.130861 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-23 00:54:57.130868 | orchestrator | Friday 23 May 2025 00:54:50 +0000 (0:00:02.464) 0:02:05.668 ************ 2025-05-23 00:54:57.130876 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130884 | orchestrator | 2025-05-23 00:54:57.130892 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-23 00:54:57.130900 | orchestrator | Friday 23 May 2025 00:54:53 +0000 (0:00:03.018) 0:02:08.686 ************ 2025-05-23 00:54:57.130907 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:54:57.130915 | orchestrator | 2025-05-23 00:54:57.130928 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:54:57.130938 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 00:54:57.130947 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:54:57.130955 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-23 00:54:57.130963 | orchestrator | 2025-05-23 
00:54:57.130971 | orchestrator | 2025-05-23 00:54:57.131020 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:54:57.131029 | orchestrator | Friday 23 May 2025 00:54:56 +0000 (0:00:02.945) 0:02:11.631 ************ 2025-05-23 00:54:57.131037 | orchestrator | =============================================================================== 2025-05-23 00:54:57.131045 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 60.79s 2025-05-23 00:54:57.131058 | orchestrator | opensearch : Restart opensearch container ------------------------------ 35.70s 2025-05-23 00:54:57.131069 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.74s 2025-05-23 00:54:57.131077 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.02s 2025-05-23 00:54:57.131085 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.95s 2025-05-23 00:54:57.131093 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.87s 2025-05-23 00:54:57.131101 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.66s 2025-05-23 00:54:57.131108 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.59s 2025-05-23 00:54:57.131116 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.46s 2025-05-23 00:54:57.131124 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.45s 2025-05-23 00:54:57.131131 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.74s 2025-05-23 00:54:57.131139 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.61s 2025-05-23 00:54:57.131147 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.48s 2025-05-23 00:54:57.131155 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.27s 2025-05-23 00:54:57.131163 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-05-23 00:54:57.131192 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2025-05-23 00:54:57.131201 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2025-05-23 00:54:57.131209 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-05-23 00:54:57.131219 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.50s 2025-05-23 00:54:57.131231 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-05-23 00:54:57.131239 | orchestrator | 2025-05-23 00:54:57 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:54:57.131334 | orchestrator | 2025-05-23 00:54:57 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:54:57.131345 | orchestrator | 2025-05-23 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:00.180979 | orchestrator | 2025-05-23 00:55:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:00.181997 | orchestrator | 2025-05-23 00:55:00 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 
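Editor's note on the two retention-policy tasks listed in the recap above ("Create new log retention policy", "Apply retention policy to existing indices"): OpenSearch log retention is normally handled through the Index State Management (ISM) plugin, where a policy is created once and then attached to already-existing indices. The log does not show the policy body the role sends, so the following Python sketch is only illustrative, written against the standard ISM endpoints; the endpoint host, credentials, policy name, index pattern and retention period are assumptions, not values taken from this deployment.

    # Illustrative sketch only: create an ISM retention policy and attach it to
    # existing indices, mirroring the two opensearch post-config tasks above.
    import requests

    OPENSEARCH = "https://192.168.16.10:9200"   # internal endpoint is an assumption
    AUTH = ("admin", "password")                # placeholder credentials
    POLICY_ID = "delete-old-logs"               # hypothetical policy name

    policy = {
        "policy": {
            "description": "Delete log indices after 14 days",
            "default_state": "hot",
            "states": [
                {"name": "hot",
                 "actions": [],
                 "transitions": [{"state_name": "delete",
                                  "conditions": {"min_index_age": "14d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
            "ism_template": [{"index_patterns": ["flog-*"], "priority": 1}],
        }
    }

    # Create (or overwrite) the policy.
    requests.put(f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}",
                 json=policy, auth=AUTH, verify=False, timeout=30).raise_for_status()

    # Attach the policy to indices that already exist (index pattern assumed).
    requests.post(f"{OPENSEARCH}/_plugins/_ism/add/flog-*",
                  json={"policy_id": POLICY_ID}, auth=AUTH, verify=False,
                  timeout=30).raise_for_status()

Certificate verification is disabled here only to keep the sketch short; a real check against the testbed's self-signed CA would pass the CA bundle instead.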
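The "Task … is in state STARTED" lines that follow are the manager polling the IDs of the deployment tasks still running in parallel (for example, the MariaDB play whose output is printed once task b6f20f68… reports SUCCESS further below) and sleeping between checks. A minimal sketch of that wait pattern, assuming a hypothetical get_task_state() helper rather than the real OSISM client:

    # Minimal sketch of the wait loop behind the "is in state STARTED" messages.
    # get_task_state() is a hypothetical callable standing in for the real task
    # API; the one-second interval and the printed messages mirror the log.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll until every task has left the STARTED/PENDING states."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):          # sorted() copies, safe to discard
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("STARTED", "PENDING"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)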
2025-05-23 00:55:00.184039 | orchestrator | 2025-05-23 00:55:00 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:00.184087 | orchestrator | 2025-05-23 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:03.232564 | orchestrator | 2025-05-23 00:55:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:03.233698 | orchestrator | 2025-05-23 00:55:03 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:03.235233 | orchestrator | 2025-05-23 00:55:03 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:03.235259 | orchestrator | 2025-05-23 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:06.290293 | orchestrator | 2025-05-23 00:55:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:06.291600 | orchestrator | 2025-05-23 00:55:06 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:06.292961 | orchestrator | 2025-05-23 00:55:06 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:06.292993 | orchestrator | 2025-05-23 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:09.345893 | orchestrator | 2025-05-23 00:55:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:09.345983 | orchestrator | 2025-05-23 00:55:09 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:09.348560 | orchestrator | 2025-05-23 00:55:09 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:09.348586 | orchestrator | 2025-05-23 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:12.385936 | orchestrator | 2025-05-23 00:55:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:12.391183 | orchestrator | 2025-05-23 00:55:12 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:12.393084 | orchestrator | 2025-05-23 00:55:12 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:12.393167 | orchestrator | 2025-05-23 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:15.442323 | orchestrator | 2025-05-23 00:55:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:15.443128 | orchestrator | 2025-05-23 00:55:15 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:15.447316 | orchestrator | 2025-05-23 00:55:15 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:15.447550 | orchestrator | 2025-05-23 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:18.494183 | orchestrator | 2025-05-23 00:55:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:18.495773 | orchestrator | 2025-05-23 00:55:18 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:18.496804 | orchestrator | 2025-05-23 00:55:18 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:18.496837 | orchestrator | 2025-05-23 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:21.548167 | orchestrator | 2025-05-23 00:55:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:21.548418 | orchestrator | 
2025-05-23 00:55:21 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:21.549709 | orchestrator | 2025-05-23 00:55:21 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:21.549782 | orchestrator | 2025-05-23 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:24.605773 | orchestrator | 2025-05-23 00:55:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:24.606308 | orchestrator | 2025-05-23 00:55:24 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:24.608071 | orchestrator | 2025-05-23 00:55:24 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:24.608257 | orchestrator | 2025-05-23 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:27.657492 | orchestrator | 2025-05-23 00:55:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:27.659439 | orchestrator | 2025-05-23 00:55:27 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:27.661567 | orchestrator | 2025-05-23 00:55:27 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:27.661603 | orchestrator | 2025-05-23 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:30.710720 | orchestrator | 2025-05-23 00:55:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:30.710829 | orchestrator | 2025-05-23 00:55:30 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:30.711223 | orchestrator | 2025-05-23 00:55:30 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:30.711256 | orchestrator | 2025-05-23 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:33.771718 | orchestrator | 2025-05-23 00:55:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:33.773205 | orchestrator | 2025-05-23 00:55:33 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:33.775355 | orchestrator | 2025-05-23 00:55:33 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:33.775393 | orchestrator | 2025-05-23 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:36.822298 | orchestrator | 2025-05-23 00:55:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:36.823366 | orchestrator | 2025-05-23 00:55:36 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:36.824567 | orchestrator | 2025-05-23 00:55:36 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:36.824616 | orchestrator | 2025-05-23 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:39.882286 | orchestrator | 2025-05-23 00:55:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:39.883801 | orchestrator | 2025-05-23 00:55:39 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:39.885973 | orchestrator | 2025-05-23 00:55:39 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:39.886133 | orchestrator | 2025-05-23 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:42.944892 | orchestrator | 2025-05-23 00:55:42 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:42.945890 | orchestrator | 2025-05-23 00:55:42 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:42.947535 | orchestrator | 2025-05-23 00:55:42 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:42.947565 | orchestrator | 2025-05-23 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:45.998322 | orchestrator | 2025-05-23 00:55:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:45.999507 | orchestrator | 2025-05-23 00:55:45 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:46.000811 | orchestrator | 2025-05-23 00:55:46 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:46.001197 | orchestrator | 2025-05-23 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:49.055334 | orchestrator | 2025-05-23 00:55:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:49.057392 | orchestrator | 2025-05-23 00:55:49 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:49.058290 | orchestrator | 2025-05-23 00:55:49 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:49.058329 | orchestrator | 2025-05-23 00:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:52.102524 | orchestrator | 2025-05-23 00:55:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:52.104567 | orchestrator | 2025-05-23 00:55:52 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:52.106554 | orchestrator | 2025-05-23 00:55:52 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:52.106583 | orchestrator | 2025-05-23 00:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:55.153306 | orchestrator | 2025-05-23 00:55:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:55.154215 | orchestrator | 2025-05-23 00:55:55 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:55.155595 | orchestrator | 2025-05-23 00:55:55 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:55.155624 | orchestrator | 2025-05-23 00:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:55:58.198772 | orchestrator | 2025-05-23 00:55:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:55:58.200218 | orchestrator | 2025-05-23 00:55:58 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:55:58.201950 | orchestrator | 2025-05-23 00:55:58 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:55:58.201985 | orchestrator | 2025-05-23 00:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:01.251117 | orchestrator | 2025-05-23 00:56:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:01.253054 | orchestrator | 2025-05-23 00:56:01 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:56:01.254840 | orchestrator | 2025-05-23 00:56:01 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:56:01.255549 | orchestrator | 2025-05-23 00:56:01 | INFO  | Wait 1 second(s) until the next 
check 2025-05-23 00:56:04.299172 | orchestrator | 2025-05-23 00:56:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:04.299279 | orchestrator | 2025-05-23 00:56:04 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:56:04.302352 | orchestrator | 2025-05-23 00:56:04 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:56:04.302433 | orchestrator | 2025-05-23 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:07.345629 | orchestrator | 2025-05-23 00:56:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:07.346566 | orchestrator | 2025-05-23 00:56:07 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:56:07.348554 | orchestrator | 2025-05-23 00:56:07 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:56:07.348589 | orchestrator | 2025-05-23 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:10.396942 | orchestrator | 2025-05-23 00:56:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:10.397639 | orchestrator | 2025-05-23 00:56:10 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state STARTED 2025-05-23 00:56:10.397916 | orchestrator | 2025-05-23 00:56:10 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state STARTED 2025-05-23 00:56:10.398304 | orchestrator | 2025-05-23 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:13.448990 | orchestrator | 2025-05-23 00:56:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:13.454957 | orchestrator | 2025-05-23 00:56:13 | INFO  | Task b6f20f68-1e51-4496-99a8-1767328b7cfd is in state SUCCESS 2025-05-23 00:56:13.457516 | orchestrator | 2025-05-23 00:56:13.457562 | orchestrator | 2025-05-23 00:56:13.457575 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-23 00:56:13.457588 | orchestrator | 2025-05-23 00:56:13.457599 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-23 00:56:13.457610 | orchestrator | Friday 23 May 2025 00:52:45 +0000 (0:00:00.190) 0:00:00.190 ************ 2025-05-23 00:56:13.457621 | orchestrator | ok: [localhost] => { 2025-05-23 00:56:13.457633 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-23 00:56:13.457645 | orchestrator | } 2025-05-23 00:56:13.457656 | orchestrator | 2025-05-23 00:56:13.457667 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-23 00:56:13.457678 | orchestrator | Friday 23 May 2025 00:52:45 +0000 (0:00:00.035) 0:00:00.225 ************ 2025-05-23 00:56:13.457689 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-23 00:56:13.457701 | orchestrator | ...ignoring 2025-05-23 00:56:13.457712 | orchestrator | 2025-05-23 00:56:13.457723 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-23 00:56:13.457734 | orchestrator | Friday 23 May 2025 00:52:47 +0000 (0:00:02.535) 0:00:02.761 ************ 2025-05-23 00:56:13.457745 | orchestrator | skipping: [localhost] 2025-05-23 00:56:13.457755 | orchestrator | 2025-05-23 00:56:13.457766 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-23 00:56:13.457776 | orchestrator | Friday 23 May 2025 00:52:47 +0000 (0:00:00.037) 0:00:02.799 ************ 2025-05-23 00:56:13.457787 | orchestrator | ok: [localhost] 2025-05-23 00:56:13.457797 | orchestrator | 2025-05-23 00:56:13.457808 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:56:13.457818 | orchestrator | 2025-05-23 00:56:13.457854 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:56:13.457865 | orchestrator | Friday 23 May 2025 00:52:47 +0000 (0:00:00.137) 0:00:02.936 ************ 2025-05-23 00:56:13.457876 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.457886 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.457897 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.457907 | orchestrator | 2025-05-23 00:56:13.457918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:56:13.457929 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:00.392) 0:00:03.329 ************ 2025-05-23 00:56:13.457939 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-23 00:56:13.457951 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-23 00:56:13.457962 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-23 00:56:13.457972 | orchestrator | 2025-05-23 00:56:13.457983 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-23 00:56:13.457993 | orchestrator | 2025-05-23 00:56:13.458004 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-23 00:56:13.458015 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:00.361) 0:00:03.690 ************ 2025-05-23 00:56:13.458078 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.458091 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-23 00:56:13.458103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-23 00:56:13.458115 | orchestrator | 2025-05-23 00:56:13.458127 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-23 00:56:13.458139 | orchestrator | Friday 23 May 2025 00:52:49 +0000 (0:00:00.695) 0:00:04.385 ************ 2025-05-23 00:56:13.458152 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.458165 | orchestrator | 2025-05-23 00:56:13.458177 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-23 00:56:13.458190 | orchestrator | Friday 23 May 2025 00:52:49 +0000 (0:00:00.540) 0:00:04.926 ************ 2025-05-23 
00:56:13.458258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458428 | orchestrator | 2025-05-23 00:56:13.458439 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-23 00:56:13.458450 | orchestrator | Friday 23 May 2025 00:52:53 +0000 (0:00:04.097) 0:00:09.023 ************ 2025-05-23 00:56:13.458461 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.458473 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.458484 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.458495 | orchestrator | 2025-05-23 00:56:13.458505 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-23 00:56:13.458516 | orchestrator | Friday 23 May 2025 00:52:54 +0000 (0:00:00.823) 0:00:09.846 ************ 2025-05-23 00:56:13.458526 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.458537 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.458548 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.458558 | orchestrator | 2025-05-23 00:56:13.458569 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-23 00:56:13.458579 | orchestrator | Friday 23 May 2025 00:52:56 +0000 (0:00:01.765) 0:00:11.612 ************ 2025-05-23 00:56:13.458606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458707 | orchestrator | 2025-05-23 00:56:13.458718 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-23 00:56:13.458729 | orchestrator | Friday 23 May 2025 00:53:03 +0000 (0:00:06.781) 0:00:18.394 ************ 2025-05-23 00:56:13.458739 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.458750 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.458761 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.458771 | orchestrator | 2025-05-23 00:56:13.458782 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-23 00:56:13.458793 | orchestrator | Friday 23 May 2025 00:53:04 +0000 (0:00:01.034) 0:00:19.429 ************ 2025-05-23 00:56:13.458803 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.458814 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.458824 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.458835 | orchestrator | 2025-05-23 00:56:13.458846 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-23 00:56:13.458857 | orchestrator | Friday 23 May 2025 00:53:12 +0000 (0:00:08.064) 0:00:27.493 
************ 2025-05-23 00:56:13.458881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-23 00:56:13.458959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-23 00:56:13.458981 | orchestrator | 2025-05-23 00:56:13.458992 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-23 00:56:13.459003 | orchestrator | Friday 23 May 2025 00:53:16 +0000 (0:00:04.219) 0:00:31.713 ************ 2025-05-23 00:56:13.459014 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.459024 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.459035 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.459046 | orchestrator | 2025-05-23 00:56:13.459057 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-23 00:56:13.459067 | orchestrator | Friday 23 May 2025 00:53:17 +0000 (0:00:01.159) 0:00:32.872 ************ 2025-05-23 00:56:13.459078 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.459089 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.459099 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.459110 | orchestrator | 2025-05-23 00:56:13.459121 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-23 00:56:13.459132 | orchestrator | Friday 23 May 2025 00:53:18 +0000 (0:00:00.447) 0:00:33.319 ************ 2025-05-23 00:56:13.459143 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.459153 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.459164 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.459174 | orchestrator | 2025-05-23 00:56:13.459185 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-23 00:56:13.459202 | orchestrator | Friday 23 May 2025 00:53:18 +0000 (0:00:00.281) 0:00:33.601 ************ 2025-05-23 00:56:13.459214 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-23 00:56:13.459225 | orchestrator | ...ignoring 2025-05-23 00:56:13.459254 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-23 00:56:13.459274 | orchestrator | ...ignoring 2025-05-23 00:56:13.459294 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-23 00:56:13.459314 | orchestrator | ...ignoring 2025-05-23 00:56:13.459333 | orchestrator | 2025-05-23 00:56:13.459351 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-23 00:56:13.459362 | orchestrator | Friday 23 May 2025 00:53:29 +0000 (0:00:11.188) 0:00:44.789 ************ 2025-05-23 00:56:13.459373 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.459411 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.459424 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.459435 | orchestrator | 2025-05-23 00:56:13.459445 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-23 00:56:13.459456 | orchestrator | Friday 23 May 2025 00:53:30 +0000 (0:00:00.691) 0:00:45.480 ************ 2025-05-23 00:56:13.459467 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.459478 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459488 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.459499 | orchestrator | 2025-05-23 00:56:13.459510 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-23 00:56:13.459520 | orchestrator | Friday 23 May 2025 00:53:31 +0000 (0:00:00.566) 0:00:46.047 ************ 2025-05-23 00:56:13.459531 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.459542 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459553 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.459563 | orchestrator | 2025-05-23 00:56:13.459581 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-23 00:56:13.459593 | orchestrator | Friday 23 May 2025 00:53:31 +0000 (0:00:00.445) 0:00:46.492 ************ 2025-05-23 00:56:13.459603 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.459614 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459625 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.459635 | orchestrator | 2025-05-23 00:56:13.459646 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-23 00:56:13.459657 | orchestrator | Friday 23 May 2025 00:53:32 +0000 (0:00:00.598) 0:00:47.091 ************ 2025-05-23 00:56:13.459668 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.459678 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.459689 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.459699 | orchestrator | 2025-05-23 00:56:13.459710 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-23 00:56:13.459721 | orchestrator | Friday 23 May 2025 00:53:32 +0000 (0:00:00.612) 0:00:47.703 ************ 2025-05-23 00:56:13.459732 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.459742 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459753 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.459764 | orchestrator | 2025-05-23 00:56:13.459775 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-23 00:56:13.459785 | orchestrator | Friday 23 May 2025 00:53:33 +0000 (0:00:00.543) 0:00:48.247 ************ 2025-05-23 00:56:13.459796 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459806 | orchestrator | skipping: 
[testbed-node-2] 2025-05-23 00:56:13.459817 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-23 00:56:13.459828 | orchestrator | 2025-05-23 00:56:13.459839 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-23 00:56:13.459857 | orchestrator | Friday 23 May 2025 00:53:33 +0000 (0:00:00.501) 0:00:48.748 ************ 2025-05-23 00:56:13.459868 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.459878 | orchestrator | 2025-05-23 00:56:13.459889 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-23 00:56:13.459900 | orchestrator | Friday 23 May 2025 00:53:44 +0000 (0:00:10.348) 0:00:59.097 ************ 2025-05-23 00:56:13.459910 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.459921 | orchestrator | 2025-05-23 00:56:13.459932 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-23 00:56:13.459943 | orchestrator | Friday 23 May 2025 00:53:44 +0000 (0:00:00.129) 0:00:59.226 ************ 2025-05-23 00:56:13.459953 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.459965 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.459975 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.459986 | orchestrator | 2025-05-23 00:56:13.459996 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-23 00:56:13.460007 | orchestrator | Friday 23 May 2025 00:53:46 +0000 (0:00:01.926) 0:01:01.152 ************ 2025-05-23 00:56:13.460018 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.460028 | orchestrator | 2025-05-23 00:56:13.460039 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-23 00:56:13.460049 | orchestrator | Friday 23 May 2025 00:53:57 +0000 (0:00:10.957) 0:01:12.109 ************ 2025-05-23 00:56:13.460060 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
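The earlier "Timeout when waiting for search string MariaDB in <ip>:3306" failures, and the retrying handler here, come from a port-liveness probe that connects to 3306 and looks for "MariaDB" in the server greeting; before the containers are up the probe times out, which the play deliberately ignores, and after the bootstrap container starts it succeeds on retry. The message format matches Ansible's wait_for module with search_regex; the snippet below is only a rough Python equivalent of that probe, not the role's actual task.

    # Rough equivalent of the port-liveness probe: connect to the MariaDB port
    # and look for "MariaDB" in the server handshake (the server advertises a
    # version string such as "10.11.x-MariaDB" in its greeting). Host, port and
    # the 10-second budget are taken from the log; the rest is illustrative.
    import socket
    import time

    def wait_for_mariadb(host, port=3306, timeout=10, connect_timeout=2):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=connect_timeout) as sock:
                    banner = sock.recv(4096)
                    if b"MariaDB" in banner:
                        return True
            except OSError:
                pass                      # port not open yet, retry until deadline
            time.sleep(1)
        raise TimeoutError(
            f"Timeout when waiting for search string MariaDB in {host}:{port}")

    # Example: wait_for_mariadb("192.168.16.10") succeeds once the container is up.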
2025-05-23 00:56:13.460071 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.460082 | orchestrator | 2025-05-23 00:56:13.460092 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-23 00:56:13.460103 | orchestrator | Friday 23 May 2025 00:54:04 +0000 (0:00:07.191) 0:01:19.301 ************ 2025-05-23 00:56:13.460114 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.460124 | orchestrator | 2025-05-23 00:56:13.460135 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-23 00:56:13.460145 | orchestrator | Friday 23 May 2025 00:54:06 +0000 (0:00:02.524) 0:01:21.825 ************ 2025-05-23 00:56:13.460156 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.460167 | orchestrator | 2025-05-23 00:56:13.460178 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-23 00:56:13.460189 | orchestrator | Friday 23 May 2025 00:54:06 +0000 (0:00:00.103) 0:01:21.929 ************ 2025-05-23 00:56:13.460199 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.460210 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.460220 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.460231 | orchestrator | 2025-05-23 00:56:13.460247 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-23 00:56:13.460258 | orchestrator | Friday 23 May 2025 00:54:07 +0000 (0:00:00.468) 0:01:22.397 ************ 2025-05-23 00:56:13.460268 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.460284 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.460303 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.460322 | orchestrator | 2025-05-23 00:56:13.460341 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-23 00:56:13.460360 | orchestrator | Friday 23 May 2025 00:54:07 +0000 (0:00:00.439) 0:01:22.836 ************ 2025-05-23 00:56:13.460378 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-23 00:56:13.460449 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.460461 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.460471 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.460482 | orchestrator | 2025-05-23 00:56:13.460493 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-23 00:56:13.460503 | orchestrator | skipping: no hosts matched 2025-05-23 00:56:13.460522 | orchestrator | 2025-05-23 00:56:13.460533 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-23 00:56:13.460543 | orchestrator | 2025-05-23 00:56:13.460554 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-23 00:56:13.460564 | orchestrator | Friday 23 May 2025 00:54:22 +0000 (0:00:14.515) 0:01:37.352 ************ 2025-05-23 00:56:13.460575 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.460586 | orchestrator | 2025-05-23 00:56:13.460604 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-23 00:56:13.460615 | orchestrator | Friday 23 May 2025 00:54:40 +0000 (0:00:18.355) 0:01:55.707 ************ 2025-05-23 00:56:13.460626 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.460636 | orchestrator | 
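The rolling restart plays that follow wait not only for the port but for Galera to report the node as synced before moving on to the next member. A minimal sketch of such a WSREP sync wait, assuming a container named mariadb and a database_password variable (both illustrative names, not necessarily those used by the role), is:

  - name: Wait for MariaDB service to sync WSREP (illustrative sketch, not the kolla-ansible task)
    ansible.builtin.command: >
      docker exec mariadb
      mysql -uroot -p{{ database_password }}
      --silent --skip-column-names
      -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_state
    changed_when: false
    retries: 10                                  # assumed retry budget
    delay: 6
    until: "'Synced' in wsrep_state.stdout"      # Galera reports "Synced" once the node has caught up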
2025-05-23 00:56:13.460647 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-23 00:56:13.460658 | orchestrator | Friday 23 May 2025 00:55:01 +0000 (0:00:20.577) 0:02:16.285 ************ 2025-05-23 00:56:13.460669 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.460679 | orchestrator | 2025-05-23 00:56:13.460690 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-23 00:56:13.460701 | orchestrator | 2025-05-23 00:56:13.460711 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-23 00:56:13.460722 | orchestrator | Friday 23 May 2025 00:55:03 +0000 (0:00:02.495) 0:02:18.781 ************ 2025-05-23 00:56:13.460732 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.460743 | orchestrator | 2025-05-23 00:56:13.460754 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-23 00:56:13.460764 | orchestrator | Friday 23 May 2025 00:55:16 +0000 (0:00:13.075) 0:02:31.857 ************ 2025-05-23 00:56:13.460775 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.460786 | orchestrator | 2025-05-23 00:56:13.460796 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-23 00:56:13.460807 | orchestrator | Friday 23 May 2025 00:55:37 +0000 (0:00:20.519) 0:02:52.376 ************ 2025-05-23 00:56:13.460818 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.460828 | orchestrator | 2025-05-23 00:56:13.460839 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-23 00:56:13.460849 | orchestrator | 2025-05-23 00:56:13.460860 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-23 00:56:13.460871 | orchestrator | Friday 23 May 2025 00:55:39 +0000 (0:00:02.488) 0:02:54.864 ************ 2025-05-23 00:56:13.460881 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.460892 | orchestrator | 2025-05-23 00:56:13.460902 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-23 00:56:13.460913 | orchestrator | Friday 23 May 2025 00:55:51 +0000 (0:00:11.732) 0:03:06.597 ************ 2025-05-23 00:56:13.460924 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.460934 | orchestrator | 2025-05-23 00:56:13.460945 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-23 00:56:13.460956 | orchestrator | Friday 23 May 2025 00:55:56 +0000 (0:00:04.523) 0:03:11.121 ************ 2025-05-23 00:56:13.460967 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.460977 | orchestrator | 2025-05-23 00:56:13.460988 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-23 00:56:13.460999 | orchestrator | 2025-05-23 00:56:13.461010 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-23 00:56:13.461020 | orchestrator | Friday 23 May 2025 00:55:58 +0000 (0:00:02.542) 0:03:13.663 ************ 2025-05-23 00:56:13.461031 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.461042 | orchestrator | 2025-05-23 00:56:13.461052 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-23 00:56:13.461063 | orchestrator | Friday 23 
May 2025 00:55:59 +0000 (0:00:00.738) 0:03:14.402 ************ 2025-05-23 00:56:13.461073 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.461084 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.461101 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.461112 | orchestrator | 2025-05-23 00:56:13.461123 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-23 00:56:13.461133 | orchestrator | Friday 23 May 2025 00:56:01 +0000 (0:00:02.546) 0:03:16.948 ************ 2025-05-23 00:56:13.461144 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.461155 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.461165 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.461176 | orchestrator | 2025-05-23 00:56:13.461186 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-23 00:56:13.461197 | orchestrator | Friday 23 May 2025 00:56:04 +0000 (0:00:02.116) 0:03:19.065 ************ 2025-05-23 00:56:13.461208 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.461218 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.461229 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.461239 | orchestrator | 2025-05-23 00:56:13.461250 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-23 00:56:13.461261 | orchestrator | Friday 23 May 2025 00:56:06 +0000 (0:00:02.354) 0:03:21.420 ************ 2025-05-23 00:56:13.461271 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.461288 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.461299 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.461311 | orchestrator | 2025-05-23 00:56:13.461331 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-23 00:56:13.461352 | orchestrator | Friday 23 May 2025 00:56:08 +0000 (0:00:02.205) 0:03:23.625 ************ 2025-05-23 00:56:13.461374 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.461413 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.461425 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.461435 | orchestrator | 2025-05-23 00:56:13.461446 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-23 00:56:13.461457 | orchestrator | Friday 23 May 2025 00:56:11 +0000 (0:00:03.333) 0:03:26.959 ************ 2025-05-23 00:56:13.461468 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.461478 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.461489 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.461499 | orchestrator | 2025-05-23 00:56:13.461510 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:56:13.461521 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-23 00:56:13.461532 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-23 00:56:13.461552 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-23 00:56:13.461563 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-23 00:56:13.461574 | orchestrator | 2025-05-23 00:56:13.461585 | orchestrator | 2025-05-23 
00:56:13.461596 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:56:13.461606 | orchestrator | Friday 23 May 2025 00:56:12 +0000 (0:00:00.406) 0:03:27.365 ************ 2025-05-23 00:56:13.461617 | orchestrator | =============================================================================== 2025-05-23 00:56:13.461628 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.10s 2025-05-23 00:56:13.461639 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 31.43s 2025-05-23 00:56:13.461649 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 14.52s 2025-05-23 00:56:13.461660 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.73s 2025-05-23 00:56:13.461678 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.19s 2025-05-23 00:56:13.461688 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.96s 2025-05-23 00:56:13.461699 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.35s 2025-05-23 00:56:13.461710 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 8.06s 2025-05-23 00:56:13.461720 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.19s 2025-05-23 00:56:13.461731 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.78s 2025-05-23 00:56:13.461741 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.98s 2025-05-23 00:56:13.461752 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.52s 2025-05-23 00:56:13.461763 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.22s 2025-05-23 00:56:13.461773 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.10s 2025-05-23 00:56:13.461784 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.33s 2025-05-23 00:56:13.461794 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.55s 2025-05-23 00:56:13.461805 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.54s 2025-05-23 00:56:13.461815 | orchestrator | Check MariaDB service --------------------------------------------------- 2.54s 2025-05-23 00:56:13.461826 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s 2025-05-23 00:56:13.461837 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.35s 2025-05-23 00:56:13.466810 | orchestrator | 2025-05-23 00:56:13 | INFO  | Task 9fe59257-9360-4a03-8fb9-115db9770c45 is in state SUCCESS 2025-05-23 00:56:13.468053 | orchestrator | 2025-05-23 00:56:13.468130 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 00:56:13.468145 | orchestrator | 2025-05-23 00:56:13.468157 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-23 00:56:13.468169 | orchestrator | 2025-05-23 00:56:13.468179 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-23 00:56:13.468190 | orchestrator | Friday 23 May 2025 00:43:16 +0000 
(0:00:01.495) 0:00:01.495 ************ 2025-05-23 00:56:13.468202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.468215 | orchestrator | 2025-05-23 00:56:13.468251 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-23 00:56:13.468264 | orchestrator | Friday 23 May 2025 00:43:17 +0000 (0:00:01.112) 0:00:02.608 ************ 2025-05-23 00:56:13.468275 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.468340 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-23 00:56:13.468355 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-23 00:56:13.468366 | orchestrator | 2025-05-23 00:56:13.468377 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-23 00:56:13.468433 | orchestrator | Friday 23 May 2025 00:43:17 +0000 (0:00:00.539) 0:00:03.147 ************ 2025-05-23 00:56:13.468446 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.468457 | orchestrator | 2025-05-23 00:56:13.468468 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-23 00:56:13.468478 | orchestrator | Friday 23 May 2025 00:43:18 +0000 (0:00:01.031) 0:00:04.179 ************ 2025-05-23 00:56:13.468504 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.468526 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.468537 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.468550 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.468664 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.468679 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.468691 | orchestrator | 2025-05-23 00:56:13.468703 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-23 00:56:13.468714 | orchestrator | Friday 23 May 2025 00:43:19 +0000 (0:00:01.284) 0:00:05.464 ************ 2025-05-23 00:56:13.468725 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.468735 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.468746 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.468756 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.468767 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.468777 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.468788 | orchestrator | 2025-05-23 00:56:13.468799 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-23 00:56:13.468809 | orchestrator | Friday 23 May 2025 00:43:20 +0000 (0:00:00.739) 0:00:06.204 ************ 2025-05-23 00:56:13.468820 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.468830 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.468841 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.468851 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.468862 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.468872 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.468883 | orchestrator | 2025-05-23 00:56:13.468893 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-23 00:56:13.468904 | orchestrator | 
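The ceph-facts tasks around this point ("check if podman binary is present", "set_fact container_binary") decide whether later container commands go through podman or docker; the subsequent "find a running mon container" task runs plain docker ps, so docker was selected on these nodes. A minimal sketch of that selection, assuming the binary is looked up at /usr/bin/podman (illustrative only, not necessarily how the role resolves it), is:

  - name: Check if podman binary is present (illustrative sketch of the detection logic)
    ansible.builtin.stat:
      path: /usr/bin/podman        # assumed lookup path
    register: podman_binary

  - name: Set container_binary based on what was found
    ansible.builtin.set_fact:
      container_binary: "{{ 'podman' if podman_binary.stat.exists else 'docker' }}"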
Friday 23 May 2025 00:43:21 +0000 (0:00:01.122) 0:00:07.326 ************ 2025-05-23 00:56:13.468915 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.468925 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.468936 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.468946 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.468957 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.468967 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.468977 | orchestrator | 2025-05-23 00:56:13.468988 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-23 00:56:13.468999 | orchestrator | Friday 23 May 2025 00:43:22 +0000 (0:00:01.141) 0:00:08.467 ************ 2025-05-23 00:56:13.469009 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.469020 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.469030 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.469041 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.469051 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.469062 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.469072 | orchestrator | 2025-05-23 00:56:13.469083 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-23 00:56:13.469094 | orchestrator | Friday 23 May 2025 00:43:23 +0000 (0:00:00.706) 0:00:09.174 ************ 2025-05-23 00:56:13.469104 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.469115 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.469125 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.469156 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.469186 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.469211 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.469223 | orchestrator | 2025-05-23 00:56:13.469234 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-23 00:56:13.469245 | orchestrator | Friday 23 May 2025 00:43:24 +0000 (0:00:00.844) 0:00:10.018 ************ 2025-05-23 00:56:13.469255 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.469267 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.469278 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.469288 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.469299 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.469310 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.469320 | orchestrator | 2025-05-23 00:56:13.469331 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-23 00:56:13.469342 | orchestrator | Friday 23 May 2025 00:43:25 +0000 (0:00:01.027) 0:00:11.046 ************ 2025-05-23 00:56:13.469360 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.469371 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.469382 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.469413 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.469424 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.469434 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.469445 | orchestrator | 2025-05-23 00:56:13.469473 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-23 00:56:13.469484 | orchestrator | Friday 23 May 2025 00:43:26 +0000 (0:00:00.884) 0:00:11.930 ************ 2025-05-23 00:56:13.469495 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.469506 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.469517 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.469527 | orchestrator | 2025-05-23 00:56:13.469538 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-23 00:56:13.469549 | orchestrator | Friday 23 May 2025 00:43:27 +0000 (0:00:00.640) 0:00:12.571 ************ 2025-05-23 00:56:13.469559 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.469570 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.469581 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.469591 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.469602 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.469612 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.469623 | orchestrator | 2025-05-23 00:56:13.469640 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-23 00:56:13.469699 | orchestrator | Friday 23 May 2025 00:43:28 +0000 (0:00:01.779) 0:00:14.351 ************ 2025-05-23 00:56:13.469714 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.469726 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.469737 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.469748 | orchestrator | 2025-05-23 00:56:13.469759 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-23 00:56:13.469769 | orchestrator | Friday 23 May 2025 00:43:31 +0000 (0:00:02.740) 0:00:17.091 ************ 2025-05-23 00:56:13.469780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.469791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.469802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.469954 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.469968 | orchestrator | 2025-05-23 00:56:13.469979 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-23 00:56:13.469990 | orchestrator | Friday 23 May 2025 00:43:32 +0000 (0:00:00.653) 0:00:17.744 ************ 2025-05-23 00:56:13.470003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470063 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470088 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470099 | orchestrator | 2025-05-23 00:56:13.470110 | orchestrator | TASK [ceph-facts : set_fact 
running_mon - non_container] *********************** 2025-05-23 00:56:13.470120 | orchestrator | Friday 23 May 2025 00:43:33 +0000 (0:00:00.775) 0:00:18.520 ************ 2025-05-23 00:56:13.470143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470177 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470201 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470211 | orchestrator | 2025-05-23 00:56:13.470222 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-23 00:56:13.470243 | orchestrator | Friday 23 May 2025 00:43:33 +0000 (0:00:00.211) 0:00:18.732 ************ 2025-05-23 00:56:13.470256 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-23 00:43:29.515725', 'end': '2025-05-23 00:43:29.773759', 'delta': '0:00:00.258034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-23 00:43:30.336030', 'end': '2025-05-23 00:43:30.593801', 'delta': '0:00:00.257771', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470289 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2025-05-23 00:43:31.133688', 'end': '2025-05-23 00:43:31.418688', 'delta': '0:00:00.285000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-23 00:56:13.470301 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470312 | orchestrator | 2025-05-23 00:56:13.470323 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-23 00:56:13.470342 | orchestrator | Friday 23 May 2025 00:43:33 +0000 (0:00:00.195) 0:00:18.927 ************ 2025-05-23 00:56:13.470353 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.470364 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.470375 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.470414 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.470443 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.470454 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.470465 | orchestrator | 2025-05-23 00:56:13.470476 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-23 00:56:13.470487 | orchestrator | Friday 23 May 2025 00:43:34 +0000 (0:00:01.379) 0:00:20.307 ************ 2025-05-23 00:56:13.470497 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.470592 | orchestrator | 2025-05-23 00:56:13.470604 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-23 00:56:13.470615 | orchestrator | Friday 23 May 2025 00:43:35 +0000 (0:00:00.642) 0:00:20.950 ************ 2025-05-23 00:56:13.470626 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470636 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.470647 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.470671 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.470682 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.470692 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.470703 | orchestrator | 2025-05-23 00:56:13.470714 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-23 00:56:13.470725 | orchestrator | Friday 23 May 2025 00:43:36 +0000 (0:00:00.952) 0:00:21.902 ************ 2025-05-23 00:56:13.470735 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470746 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.470794 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.470805 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.470830 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.470841 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.470851 | orchestrator | 2025-05-23 00:56:13.470862 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:56:13.470954 | orchestrator | Friday 23 May 2025 00:43:37 +0000 (0:00:01.160) 0:00:23.063 ************ 2025-05-23 00:56:13.470972 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.470992 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.471023 | 
orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.471041 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.471059 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.471075 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.471147 | orchestrator | 2025-05-23 00:56:13.471168 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-23 00:56:13.471186 | orchestrator | Friday 23 May 2025 00:43:38 +0000 (0:00:00.526) 0:00:23.589 ************ 2025-05-23 00:56:13.471227 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.471239 | orchestrator | 2025-05-23 00:56:13.471250 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-23 00:56:13.471261 | orchestrator | Friday 23 May 2025 00:43:38 +0000 (0:00:00.119) 0:00:23.708 ************ 2025-05-23 00:56:13.471283 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472058 | orchestrator | 2025-05-23 00:56:13.472077 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:56:13.472091 | orchestrator | Friday 23 May 2025 00:43:38 +0000 (0:00:00.479) 0:00:24.187 ************ 2025-05-23 00:56:13.472104 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472114 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.472125 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472136 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472146 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472157 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472167 | orchestrator | 2025-05-23 00:56:13.472192 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-23 00:56:13.472203 | orchestrator | Friday 23 May 2025 00:43:39 +0000 (0:00:00.762) 0:00:24.950 ************ 2025-05-23 00:56:13.472228 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472239 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.472250 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472261 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472271 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472282 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472316 | orchestrator | 2025-05-23 00:56:13.472346 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-23 00:56:13.472358 | orchestrator | Friday 23 May 2025 00:43:40 +0000 (0:00:01.037) 0:00:25.988 ************ 2025-05-23 00:56:13.472368 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472379 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.472419 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472439 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472456 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472475 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472488 | orchestrator | 2025-05-23 00:56:13.472499 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-23 00:56:13.472510 | orchestrator | Friday 23 May 2025 00:43:41 +0000 (0:00:00.892) 0:00:26.880 ************ 2025-05-23 00:56:13.472521 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472531 | orchestrator | skipping: [testbed-node-1] 2025-05-23 
00:56:13.472542 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472552 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472563 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472573 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472584 | orchestrator | 2025-05-23 00:56:13.472595 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-23 00:56:13.472606 | orchestrator | Friday 23 May 2025 00:43:42 +0000 (0:00:00.736) 0:00:27.617 ************ 2025-05-23 00:56:13.472675 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472687 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.472698 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472731 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472743 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472813 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472824 | orchestrator | 2025-05-23 00:56:13.472835 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-23 00:56:13.472846 | orchestrator | Friday 23 May 2025 00:43:42 +0000 (0:00:00.742) 0:00:28.359 ************ 2025-05-23 00:56:13.472856 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.472867 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.472877 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.472888 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.472899 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.472909 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.472920 | orchestrator | 2025-05-23 00:56:13.472931 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-23 00:56:13.473020 | orchestrator | Friday 23 May 2025 00:43:43 +0000 (0:00:01.097) 0:00:29.456 ************ 2025-05-23 00:56:13.473032 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.473081 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.473094 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.473105 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.473115 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.473126 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.473136 | orchestrator | 2025-05-23 00:56:13.473147 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-23 00:56:13.473158 | orchestrator | Friday 23 May 2025 00:43:45 +0000 (0:00:01.231) 0:00:30.687 ************ 2025-05-23 00:56:13.473196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473455 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb', 'scsi-SQEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb22843a-e40d-4bb3-bbec-4f656ad57efb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473576 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.473587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-23 00:56:13.473615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473664 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.473837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6', 'scsi-SQEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7c04edb2-2eed-4aa0-a7e5-eb868ae2e4f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.473909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17b95678--9240--5166--938b--e89fe6559568-osd--block--17b95678--9240--5166--938b--e89fe6559568', 'dm-uuid-LVM-ZZwSWRZHA2e2gvBfEGalnZuCgncqo9stGwibsev2qcB1RIlttmtzeRTYn2s1YPlP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0-osd--block--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0', 
'dm-uuid-LVM-DNrsdmT8wqsoRqzmZbxiCLodm4pU7RBZWG6tPFDH6wj2dDw2rDAo1rNBYpypshxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.473963 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.473993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--125adf16--eac9--5ada--96e7--bcd4f30a545d-osd--block--125adf16--eac9--5ada--96e7--bcd4f30a545d', 'dm-uuid-LVM-BCxmKIXJutF9vlKp4BKB1Q8l1VtN5qBeelKjY7Rw2QjAo5NEVnruVcRqGRclAHko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bf3a31b--2d76--5988--bbd2--6800630d4c9a-osd--block--8bf3a31b--2d76--5988--bbd2--6800630d4c9a', 'dm-uuid-LVM-s1iYMsXZDLp7O33pcgRsDdeeRDpLn8e0FocQylEEkBSTHxQh86afwfAyqdvOeV3u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part1', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part14', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part15', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part16', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--17b95678--9240--5166--938b--e89fe6559568-osd--block--17b95678--9240--5166--938b--e89fe6559568'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GL2ip-Wdk8-SvYq-3j44-2Nl3-OHgp-XXXVcL', 'scsi-0QEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5', 'scsi-SQEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0-osd--block--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60aTsd-C4Zb-g0m4-tLi0-tj0N-by2m-9zrULV', 'scsi-0QEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75', 'scsi-SQEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c', 'scsi-SQEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--125adf16--eac9--5ada--96e7--bcd4f30a545d-osd--block--125adf16--eac9--5ada--96e7--bcd4f30a545d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0rkl1-FbNd-ZPh5-sRAc-RVz3-orO8-96jH9g', 'scsi-0QEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7', 'scsi-SQEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8bf3a31b--2d76--5988--bbd2--6800630d4c9a-osd--block--8bf3a31b--2d76--5988--bbd2--6800630d4c9a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQ0zEK-kQ0w-r2eu-fsPZ-nI4k-HPl8-obn7zz', 'scsi-0QEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652', 'scsi-SQEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d', 'scsi-SQEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474548 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.474560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474571 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.474591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0-osd--block--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0', 'dm-uuid-LVM-tRVA94W9EmSLkJeczqHMXIcyood4OFdpZpq2LYzfUgHH95d0i0dLrgDPWXuxNWTR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dafe69f8--630b--5486--ba76--590e0b4d1820-osd--block--dafe69f8--630b--5486--ba76--590e0b4d1820', 'dm-uuid-LVM-ODjD6BqtmT1AFJHA3ZBheIhvhE3MvAXhtM0cOUyZ7GOIdZ7sJfg38u5r01vOEjsd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:56:13.474729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part1', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part14', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part15', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part16', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0-osd--block--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0A2T45-X40H-s0d6-nUxd-bzRE-jAFi-fH6wa8', 'scsi-0QEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7', 'scsi-SQEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dafe69f8--630b--5486--ba76--590e0b4d1820-osd--block--dafe69f8--630b--5486--ba76--590e0b4d1820'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nE4PFT-HwJj-uK0a-LG5n-nYOl-mJsJ-OKMseI', 'scsi-0QEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127', 'scsi-SQEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6', 'scsi-SQEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:56:13.474803 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.474820 | orchestrator | 2025-05-23 00:56:13.474835 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-23 00:56:13.474847 | orchestrator | Friday 23 May 2025 00:43:46 +0000 (0:00:01.643) 0:00:32.331 ************ 2025-05-23 00:56:13.474858 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.474869 | orchestrator | 2025-05-23 00:56:13.474880 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-23 00:56:13.474890 | orchestrator | Friday 23 May 2025 00:43:47 +0000 (0:00:00.286) 0:00:32.618 ************ 2025-05-23 00:56:13.474901 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.474912 | orchestrator | 2025-05-23 00:56:13.474923 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-23 00:56:13.474934 | orchestrator | Friday 23 May 2025 00:43:47 +0000 (0:00:00.154) 0:00:32.772 ************ 2025-05-23 00:56:13.474944 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.474955 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.474966 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.474976 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.474987 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.474998 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475009 | orchestrator | 2025-05-23 00:56:13.475019 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-23 00:56:13.475030 | orchestrator | Friday 23 May 2025 00:43:48 +0000 
(0:00:00.783) 0:00:33.556 ************ 2025-05-23 00:56:13.475041 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.475051 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.475062 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.475073 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.475083 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.475094 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.475104 | orchestrator | 2025-05-23 00:56:13.475115 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-23 00:56:13.475126 | orchestrator | Friday 23 May 2025 00:43:49 +0000 (0:00:01.346) 0:00:34.902 ************ 2025-05-23 00:56:13.475137 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.475147 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.475158 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.475169 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.475179 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.475190 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.475200 | orchestrator | 2025-05-23 00:56:13.475211 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:56:13.475222 | orchestrator | Friday 23 May 2025 00:43:50 +0000 (0:00:00.915) 0:00:35.818 ************ 2025-05-23 00:56:13.475232 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.475243 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.475254 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.475264 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.475275 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.475286 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475296 | orchestrator | 2025-05-23 00:56:13.475307 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:56:13.475317 | orchestrator | Friday 23 May 2025 00:43:51 +0000 (0:00:01.632) 0:00:37.450 ************ 2025-05-23 00:56:13.475328 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.475338 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.475349 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.475359 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.475370 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.475380 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475452 | orchestrator | 2025-05-23 00:56:13.475464 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:56:13.475475 | orchestrator | Friday 23 May 2025 00:43:52 +0000 (0:00:00.980) 0:00:38.431 ************ 2025-05-23 00:56:13.475493 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.475503 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.475514 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.475524 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.475535 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.475545 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475556 | orchestrator | 2025-05-23 00:56:13.475567 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:56:13.475577 | orchestrator | Friday 23 May 2025 00:43:54 +0000 (0:00:01.196) 0:00:39.628 ************ 2025-05-23 00:56:13.475588 
| orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.475598 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.475609 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.475619 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.475630 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.475640 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475651 | orchestrator | 2025-05-23 00:56:13.475662 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-23 00:56:13.475672 | orchestrator | Friday 23 May 2025 00:43:55 +0000 (0:00:01.012) 0:00:40.640 ************ 2025-05-23 00:56:13.475683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.475700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.475711 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-23 00:56:13.475722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-23 00:56:13.475733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.475744 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-23 00:56:13.475754 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-23 00:56:13.475765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:56:13.475775 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-23 00:56:13.475786 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.475797 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.475807 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-23 00:56:13.475818 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.475829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:56:13.475844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:56:13.475855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:56:13.475866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:56:13.475876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:56:13.475885 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.475895 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:56:13.475904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:56:13.475914 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.475924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:56:13.475933 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.475942 | orchestrator | 2025-05-23 00:56:13.475952 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-23 00:56:13.475961 | orchestrator | Friday 23 May 2025 00:43:58 +0000 (0:00:03.405) 0:00:44.046 ************ 2025-05-23 00:56:13.475971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.475980 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-23 00:56:13.475990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.475999 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-23 
00:56:13.476009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.476024 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.476034 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-23 00:56:13.476043 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-23 00:56:13.476053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:56:13.476062 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.476072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-23 00:56:13.476081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:56:13.476091 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:56:13.476100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:56:13.476109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-23 00:56:13.476119 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.476128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:56:13.476138 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.476147 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:56:13.476156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:56:13.476166 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:56:13.476175 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.476185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:56:13.476194 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.476203 | orchestrator | 2025-05-23 00:56:13.476213 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-23 00:56:13.476223 | orchestrator | Friday 23 May 2025 00:44:00 +0000 (0:00:02.298) 0:00:46.345 ************ 2025-05-23 00:56:13.476232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.476242 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-23 00:56:13.476251 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-23 00:56:13.476260 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-23 00:56:13.476270 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-23 00:56:13.476279 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-23 00:56:13.476289 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-23 00:56:13.476298 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-23 00:56:13.476307 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-23 00:56:13.476317 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-23 00:56:13.476326 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-23 00:56:13.476336 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-23 00:56:13.476345 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-23 00:56:13.476355 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-23 00:56:13.476364 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-23 00:56:13.476373 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-23 00:56:13.476404 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-05-23 00:56:13.476423 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-23 00:56:13.476440 | orchestrator | 2025-05-23 00:56:13.476455 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-23 00:56:13.476471 | orchestrator | Friday 23 May 2025 00:44:05 +0000 (0:00:04.851) 0:00:51.196 ************ 2025-05-23 00:56:13.476481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.476491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.476501 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.476510 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.476520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-23 00:56:13.476536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-23 00:56:13.476545 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-23 00:56:13.476555 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.476564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-23 00:56:13.476573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-23 00:56:13.476583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-23 00:56:13.476597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:56:13.476606 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.476616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:56:13.476625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:56:13.476634 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:56:13.476644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:56:13.476653 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:56:13.476662 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.476672 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.476681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:56:13.476690 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:56:13.476700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:56:13.476709 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.476719 | orchestrator | 2025-05-23 00:56:13.476728 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-23 00:56:13.476738 | orchestrator | Friday 23 May 2025 00:44:07 +0000 (0:00:01.612) 0:00:52.808 ************ 2025-05-23 00:56:13.476747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.476757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.476766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.476776 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.476785 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-23 00:56:13.476795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-23 00:56:13.476804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-23 
00:56:13.476814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-23 00:56:13.476823 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-23 00:56:13.476832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-23 00:56:13.476842 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.476851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:56:13.476861 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.476870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:56:13.476880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:56:13.476889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:56:13.476898 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.476908 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:56:13.476917 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:56:13.476927 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.476936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:56:13.476945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:56:13.476955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:56:13.476964 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.476974 | orchestrator | 2025-05-23 00:56:13.476983 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-23 00:56:13.476998 | orchestrator | Friday 23 May 2025 00:44:08 +0000 (0:00:01.185) 0:00:53.994 ************ 2025-05-23 00:56:13.477008 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-23 00:56:13.477018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:56:13.477028 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:56:13.477037 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:56:13.477047 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-23 00:56:13.477056 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:56:13.477066 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:56:13.477075 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:56:13.477085 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-23 00:56:13.477099 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:56:13.477109 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:56:13.477119 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:56:13.477128 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:56:13.477138 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:56:13.477147 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:56:13.477157 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477166 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477176 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:56:13.477192 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:56:13.477202 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:56:13.477212 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477221 | orchestrator | 2025-05-23 00:56:13.477231 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-23 00:56:13.477240 | orchestrator | Friday 23 May 2025 00:44:09 +0000 (0:00:01.275) 0:00:55.269 ************ 2025-05-23 00:56:13.477250 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.477260 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.477269 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.477279 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.477288 | orchestrator | 2025-05-23 00:56:13.477298 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.477309 | orchestrator | Friday 23 May 2025 00:44:11 +0000 (0:00:01.463) 0:00:56.733 ************ 2025-05-23 00:56:13.477318 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477327 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477337 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477346 | orchestrator | 2025-05-23 00:56:13.477356 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.477366 | orchestrator | Friday 23 May 2025 00:44:11 +0000 (0:00:00.733) 0:00:57.466 ************ 2025-05-23 00:56:13.477375 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477410 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477420 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477429 | orchestrator | 2025-05-23 00:56:13.477439 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.477448 | orchestrator | Friday 23 May 2025 00:44:12 +0000 (0:00:00.731) 0:00:58.198 ************ 2025-05-23 00:56:13.477458 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477467 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477477 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477486 | orchestrator | 2025-05-23 00:56:13.477496 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.477505 | orchestrator | Friday 23 May 2025 00:44:13 +0000 (0:00:00.790) 0:00:58.988 ************ 2025-05-23 00:56:13.477515 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.477524 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.477534 | orchestrator | ok: [testbed-node-5] 
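[editor's note] The ceph-facts tasks above resolve a per-host RADOS gateway address for the storage nodes (192.168.16.13-15 in this run): the radosgw_address_block and radosgw_interface variants are skipped and only the explicit radosgw_address branch reports ok. As a purely illustrative sketch of that selection pattern — not the ceph-ansible source — it could be expressed as an Ansible task file like the following; the task and variable names mirror the log, but the bodies are assumptions:

# hypothetical_set_radosgw_address.yml -- illustrative only, not ceph-facts
- name: set_fact _radosgw_address to radosgw_address (hypothetical sketch)
  ansible.builtin.set_fact:
    _radosgw_address: "{{ radosgw_address }}"
  # Taken when an explicit per-host address is configured, as in this run.
  when: radosgw_address is defined

- name: set_fact _radosgw_address to radosgw_interface - ipv4 (hypothetical sketch)
  ansible.builtin.set_fact:
    # Ansible stores interface facts with dashes converted to underscores.
    _radosgw_address: "{{ ansible_facts[radosgw_interface | replace('-', '_')]['ipv4']['address'] }}"
  # Fallback when no explicit address is given; skipped in this run.
  when: radosgw_address is not defined

Because the explicit-address branch wins here, the "_radosgw_address to radosgw_address" task is ok for testbed-node-3/4/5 while the block- and interface-based variants are skipped, which matches the log output above.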
2025-05-23 00:56:13.477543 | orchestrator | 2025-05-23 00:56:13.477553 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.477562 | orchestrator | Friday 23 May 2025 00:44:14 +0000 (0:00:01.156) 0:01:00.144 ************ 2025-05-23 00:56:13.477572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.477581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.477591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.477600 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477610 | orchestrator | 2025-05-23 00:56:13.477619 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.477629 | orchestrator | Friday 23 May 2025 00:44:15 +0000 (0:00:00.754) 0:01:00.898 ************ 2025-05-23 00:56:13.477639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.477648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.477658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.477667 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477677 | orchestrator | 2025-05-23 00:56:13.477686 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.477696 | orchestrator | Friday 23 May 2025 00:44:16 +0000 (0:00:00.679) 0:01:01.578 ************ 2025-05-23 00:56:13.477705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.477715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.477724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.477734 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477743 | orchestrator | 2025-05-23 00:56:13.477753 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.477762 | orchestrator | Friday 23 May 2025 00:44:17 +0000 (0:00:00.933) 0:01:02.511 ************ 2025-05-23 00:56:13.477772 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.477781 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.477791 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.477800 | orchestrator | 2025-05-23 00:56:13.477810 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.477824 | orchestrator | Friday 23 May 2025 00:44:17 +0000 (0:00:00.514) 0:01:03.025 ************ 2025-05-23 00:56:13.477834 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-23 00:56:13.477844 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-23 00:56:13.477853 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-23 00:56:13.477863 | orchestrator | 2025-05-23 00:56:13.477873 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.477882 | orchestrator | Friday 23 May 2025 00:44:18 +0000 (0:00:01.115) 0:01:04.141 ************ 2025-05-23 00:56:13.477892 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477901 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477917 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477926 | orchestrator | 2025-05-23 00:56:13.477936 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.477945 | orchestrator | Friday 23 May 2025 00:44:19 +0000 (0:00:00.594) 0:01:04.735 ************ 2025-05-23 00:56:13.477955 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.477964 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.477974 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.477983 | orchestrator | 2025-05-23 00:56:13.477997 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.478007 | orchestrator | Friday 23 May 2025 00:44:19 +0000 (0:00:00.696) 0:01:05.432 ************ 2025-05-23 00:56:13.478936 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.478961 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.478971 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.478981 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.478989 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.478997 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479005 | orchestrator | 2025-05-23 00:56:13.479013 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.479021 | orchestrator | Friday 23 May 2025 00:44:20 +0000 (0:00:00.795) 0:01:06.228 ************ 2025-05-23 00:56:13.479030 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.479038 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.479047 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.479055 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.479063 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.479071 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479079 | orchestrator | 2025-05-23 00:56:13.479088 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.479096 | orchestrator | Friday 23 May 2025 00:44:21 +0000 (0:00:00.706) 0:01:06.935 ************ 2025-05-23 00:56:13.479104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.479112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.479120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.479128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.479136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.479144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.479152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.479160 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.479176 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.479184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.479192 | orchestrator | skipping: [testbed-node-4] 2025-05-23 
00:56:13.479200 | orchestrator | 2025-05-23 00:56:13.479208 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-23 00:56:13.479216 | orchestrator | Friday 23 May 2025 00:44:22 +0000 (0:00:01.041) 0:01:07.976 ************ 2025-05-23 00:56:13.479224 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.479232 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.479240 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.479248 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.479256 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.479264 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479281 | orchestrator | 2025-05-23 00:56:13.479289 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-23 00:56:13.479297 | orchestrator | Friday 23 May 2025 00:44:23 +0000 (0:00:01.185) 0:01:09.161 ************ 2025-05-23 00:56:13.479306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.479314 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.479322 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.479331 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-23 00:56:13.479339 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:56:13.479347 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:56:13.479355 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:56:13.479363 | orchestrator | 2025-05-23 00:56:13.479371 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-23 00:56:13.479379 | orchestrator | Friday 23 May 2025 00:44:24 +0000 (0:00:01.022) 0:01:10.184 ************ 2025-05-23 00:56:13.479401 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:56:13.479496 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.479510 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.479518 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-23 00:56:13.479526 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:56:13.479535 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:56:13.479543 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:56:13.479551 | orchestrator | 2025-05-23 00:56:13.479559 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.479567 | orchestrator | Friday 23 May 2025 00:44:27 +0000 (0:00:02.436) 0:01:12.620 ************ 2025-05-23 00:56:13.479582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.479592 | orchestrator | 2025-05-23 00:56:13.479600 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-23 00:56:13.479608 | orchestrator | Friday 23 May 2025 00:44:28 +0000 (0:00:01.147) 0:01:13.768 ************ 2025-05-23 00:56:13.479616 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.479625 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.479633 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.479641 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.479649 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.479657 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479665 | orchestrator | 2025-05-23 00:56:13.479674 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.479682 | orchestrator | Friday 23 May 2025 00:44:29 +0000 (0:00:00.967) 0:01:14.736 ************ 2025-05-23 00:56:13.479690 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.479698 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.479706 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.479715 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.479723 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.479731 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.479739 | orchestrator | 2025-05-23 00:56:13.479747 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.479755 | orchestrator | Friday 23 May 2025 00:44:30 +0000 (0:00:01.262) 0:01:15.998 ************ 2025-05-23 00:56:13.479770 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.479779 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.479787 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.479795 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.479803 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.479811 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.479819 | orchestrator | 2025-05-23 00:56:13.479827 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.479835 | orchestrator | Friday 23 May 2025 00:44:31 +0000 (0:00:01.226) 0:01:17.224 ************ 2025-05-23 00:56:13.479843 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.479851 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.479860 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.479868 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.479876 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.479884 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.479892 | orchestrator | 2025-05-23 00:56:13.479900 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.479908 | orchestrator | Friday 23 May 2025 00:44:32 +0000 (0:00:01.063) 0:01:18.288 ************ 2025-05-23 00:56:13.479916 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.479924 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.479932 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.479940 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.479948 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.479956 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.479964 | orchestrator | 2025-05-23 00:56:13.479972 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 
00:56:13.479980 | orchestrator | Friday 23 May 2025 00:44:34 +0000 (0:00:01.298) 0:01:19.586 ************ 2025-05-23 00:56:13.479988 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.479996 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480004 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480012 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480020 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480028 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480036 | orchestrator | 2025-05-23 00:56:13.480044 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.480052 | orchestrator | Friday 23 May 2025 00:44:34 +0000 (0:00:00.725) 0:01:20.312 ************ 2025-05-23 00:56:13.480060 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480068 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480077 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480085 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480093 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480100 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480109 | orchestrator | 2025-05-23 00:56:13.480117 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.480125 | orchestrator | Friday 23 May 2025 00:44:36 +0000 (0:00:01.525) 0:01:21.839 ************ 2025-05-23 00:56:13.480133 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480141 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480149 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480157 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480165 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480173 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480181 | orchestrator | 2025-05-23 00:56:13.480189 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.480197 | orchestrator | Friday 23 May 2025 00:44:37 +0000 (0:00:01.214) 0:01:23.053 ************ 2025-05-23 00:56:13.480266 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480277 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480285 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480293 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480307 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480315 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480323 | orchestrator | 2025-05-23 00:56:13.480331 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.480339 | orchestrator | Friday 23 May 2025 00:44:38 +0000 (0:00:00.903) 0:01:23.956 ************ 2025-05-23 00:56:13.480347 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480355 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480363 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480370 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480378 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480401 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480409 | orchestrator | 2025-05-23 00:56:13.480417 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-05-23 00:56:13.480425 | orchestrator | Friday 23 May 2025 00:44:39 +0000 (0:00:01.007) 0:01:24.964 ************ 2025-05-23 00:56:13.480437 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.480445 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.480453 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.480461 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.480469 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.480477 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.480484 | orchestrator | 2025-05-23 00:56:13.480492 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.480500 | orchestrator | Friday 23 May 2025 00:44:41 +0000 (0:00:01.893) 0:01:26.858 ************ 2025-05-23 00:56:13.480508 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480516 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480524 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480532 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480539 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480547 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480555 | orchestrator | 2025-05-23 00:56:13.480563 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.480571 | orchestrator | Friday 23 May 2025 00:44:41 +0000 (0:00:00.595) 0:01:27.453 ************ 2025-05-23 00:56:13.480579 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.480587 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.480594 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.480602 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480610 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480618 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480626 | orchestrator | 2025-05-23 00:56:13.480663 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.480671 | orchestrator | Friday 23 May 2025 00:44:42 +0000 (0:00:00.840) 0:01:28.294 ************ 2025-05-23 00:56:13.480679 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480687 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480694 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480702 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.480710 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.480718 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.480726 | orchestrator | 2025-05-23 00:56:13.480733 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.480741 | orchestrator | Friday 23 May 2025 00:44:43 +0000 (0:00:00.627) 0:01:28.921 ************ 2025-05-23 00:56:13.480749 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480757 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480765 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480773 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.480780 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.480788 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.480796 | orchestrator | 2025-05-23 00:56:13.480804 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.480817 | orchestrator | Friday 23 May 2025 00:44:44 +0000 
(0:00:00.853) 0:01:29.775 ************ 2025-05-23 00:56:13.480847 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480855 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480863 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480871 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.480879 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.480886 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.480894 | orchestrator | 2025-05-23 00:56:13.480902 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.480910 | orchestrator | Friday 23 May 2025 00:44:44 +0000 (0:00:00.697) 0:01:30.472 ************ 2025-05-23 00:56:13.480917 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480925 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.480933 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.480941 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.480948 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.480956 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.480964 | orchestrator | 2025-05-23 00:56:13.480971 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.480979 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:01.130) 0:01:31.603 ************ 2025-05-23 00:56:13.480987 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.480995 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481002 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481010 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481017 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481025 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481033 | orchestrator | 2025-05-23 00:56:13.481040 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.481048 | orchestrator | Friday 23 May 2025 00:44:46 +0000 (0:00:00.600) 0:01:32.203 ************ 2025-05-23 00:56:13.481056 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.481064 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.481071 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.481079 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481087 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481094 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481102 | orchestrator | 2025-05-23 00:56:13.481110 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.481188 | orchestrator | Friday 23 May 2025 00:44:47 +0000 (0:00:00.915) 0:01:33.118 ************ 2025-05-23 00:56:13.481205 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.481218 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.481230 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.481242 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.481254 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.481267 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.481279 | orchestrator | 2025-05-23 00:56:13.481292 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.481305 | orchestrator | Friday 23 May 2025 00:44:48 +0000 (0:00:00.847) 0:01:33.966 ************ 2025-05-23 00:56:13.481318 
| orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481330 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481338 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481346 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481354 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481362 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481369 | orchestrator | 2025-05-23 00:56:13.481377 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.481437 | orchestrator | Friday 23 May 2025 00:44:49 +0000 (0:00:01.114) 0:01:35.081 ************ 2025-05-23 00:56:13.481446 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481454 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481472 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481480 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481488 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481496 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481504 | orchestrator | 2025-05-23 00:56:13.481512 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.481520 | orchestrator | Friday 23 May 2025 00:44:50 +0000 (0:00:00.740) 0:01:35.822 ************ 2025-05-23 00:56:13.481528 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481535 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481543 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481551 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481559 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481568 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481582 | orchestrator | 2025-05-23 00:56:13.481592 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.481600 | orchestrator | Friday 23 May 2025 00:44:51 +0000 (0:00:00.949) 0:01:36.771 ************ 2025-05-23 00:56:13.481608 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481615 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481623 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481631 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481638 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481646 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481654 | orchestrator | 2025-05-23 00:56:13.481661 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.481669 | orchestrator | Friday 23 May 2025 00:44:51 +0000 (0:00:00.655) 0:01:37.427 ************ 2025-05-23 00:56:13.481677 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481685 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481692 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481700 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481708 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481716 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481723 | orchestrator | 2025-05-23 00:56:13.481731 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.481739 | orchestrator | Friday 23 May 2025 00:44:52 +0000 (0:00:01.027) 0:01:38.454 ************ 2025-05-23 
00:56:13.481747 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481755 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481762 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481770 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481778 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481785 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481793 | orchestrator | 2025-05-23 00:56:13.481801 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.481810 | orchestrator | Friday 23 May 2025 00:44:53 +0000 (0:00:00.586) 0:01:39.041 ************ 2025-05-23 00:56:13.481819 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481828 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481837 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481846 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481854 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481864 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481872 | orchestrator | 2025-05-23 00:56:13.481881 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.481889 | orchestrator | Friday 23 May 2025 00:44:54 +0000 (0:00:00.881) 0:01:39.922 ************ 2025-05-23 00:56:13.481897 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481904 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481911 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481918 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481930 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.481937 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.481944 | orchestrator | 2025-05-23 00:56:13.481952 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.481960 | orchestrator | Friday 23 May 2025 00:44:55 +0000 (0:00:00.601) 0:01:40.523 ************ 2025-05-23 00:56:13.481967 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.481975 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.481982 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.481990 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.481997 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482004 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482012 | orchestrator | 2025-05-23 00:56:13.482043 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.482051 | orchestrator | Friday 23 May 2025 00:44:55 +0000 (0:00:00.840) 0:01:41.364 ************ 2025-05-23 00:56:13.482059 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482067 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482136 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482146 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482154 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482162 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482169 | orchestrator | 2025-05-23 00:56:13.482177 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.482184 
| orchestrator | Friday 23 May 2025 00:44:56 +0000 (0:00:00.677) 0:01:42.042 ************ 2025-05-23 00:56:13.482190 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482197 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482204 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482210 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482217 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482223 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482230 | orchestrator | 2025-05-23 00:56:13.482236 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.482243 | orchestrator | Friday 23 May 2025 00:44:57 +0000 (0:00:00.980) 0:01:43.023 ************ 2025-05-23 00:56:13.482249 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482260 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482267 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482273 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482280 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482286 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482293 | orchestrator | 2025-05-23 00:56:13.482300 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.482306 | orchestrator | Friday 23 May 2025 00:44:58 +0000 (0:00:00.675) 0:01:43.698 ************ 2025-05-23 00:56:13.482313 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.482320 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.482326 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482333 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.482339 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.482345 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482352 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.482358 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.482365 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482371 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.482377 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.482400 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482407 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.482414 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.482426 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482432 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.482439 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.482445 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482452 | orchestrator | 2025-05-23 00:56:13.482458 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.482465 | orchestrator | Friday 23 May 2025 00:44:59 +0000 (0:00:00.946) 0:01:44.645 ************ 2025-05-23 00:56:13.482471 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-23 00:56:13.482478 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-23 00:56:13.482495 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-23 
00:56:13.482502 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-23 00:56:13.482509 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482516 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-23 00:56:13.482522 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-23 00:56:13.482529 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482535 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-23 00:56:13.482542 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-23 00:56:13.482548 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482554 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-23 00:56:13.482561 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482567 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-23 00:56:13.482574 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482580 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-23 00:56:13.482587 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-23 00:56:13.482593 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482600 | orchestrator | 2025-05-23 00:56:13.482628 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.482635 | orchestrator | Friday 23 May 2025 00:44:59 +0000 (0:00:00.614) 0:01:45.259 ************ 2025-05-23 00:56:13.482641 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482648 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482654 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482661 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482667 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482674 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482680 | orchestrator | 2025-05-23 00:56:13.482687 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.482693 | orchestrator | Friday 23 May 2025 00:45:00 +0000 (0:00:00.883) 0:01:46.142 ************ 2025-05-23 00:56:13.482700 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482706 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482713 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482719 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482726 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482732 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482738 | orchestrator | 2025-05-23 00:56:13.482745 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.482803 | orchestrator | Friday 23 May 2025 00:45:01 +0000 (0:00:00.621) 0:01:46.764 ************ 2025-05-23 00:56:13.482812 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482819 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482826 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482832 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482839 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482845 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482860 | orchestrator | 
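For context on the ceph-config tasks logged above (all skipped on this run): the task names indicate that num_osds is derived by asking `ceph-volume lvm batch --report` how many OSDs would be created and then adding the OSDs already present according to `ceph-volume lvm list`. The following is a minimal, hypothetical Python sketch of that counting logic only; it assumes the new-style batch report is a JSON list of planned OSDs, the legacy report wraps them in an "osds" key, and the lvm list output is keyed by OSD id. The sample payloads are invented for illustration and are not taken from this build.

    import json

    def count_planned_osds(report_json: str) -> int:
        """Count OSDs a 'ceph-volume lvm batch --report' run would create.
        Handles both report shapes referenced in the task names above:
        the legacy report ({"osds": [...]}) and the new report (a plain list)."""
        report = json.loads(report_json)
        if isinstance(report, dict):      # legacy report shape (assumed)
            return len(report.get("osds", []))
        return len(report)                # new report shape: list of planned OSDs

    def count_existing_osds(lvm_list_json: str) -> int:
        """Count OSDs already created, from JSON-formatted 'ceph-volume lvm list'
        output (assumed to be keyed by OSD id)."""
        return len(json.loads(lvm_list_json))

    if __name__ == "__main__":
        # Invented sample payloads, only to exercise the helpers.
        new_report = '[{"data": "/dev/sdb"}, {"data": "/dev/sdc"}]'
        lvm_list = '{"0": [{"lv_path": "/dev/ceph-xxx/osd-block-xxx"}]}'
        num_osds = count_planned_osds(new_report) + count_existing_osds(lvm_list)
        print(f"num_osds = {num_osds}")   # -> num_osds = 3

In the log above these tasks are skipped on every node, so num_osds is not recomputed during this play; the sketch only illustrates what the task names describe.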
2025-05-23 00:56:13.482867 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.482873 | orchestrator | Friday 23 May 2025 00:45:02 +0000 (0:00:00.869) 0:01:47.633 ************ 2025-05-23 00:56:13.482880 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482886 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482893 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482899 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482906 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482912 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482919 | orchestrator | 2025-05-23 00:56:13.482925 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.482935 | orchestrator | Friday 23 May 2025 00:45:02 +0000 (0:00:00.596) 0:01:48.230 ************ 2025-05-23 00:56:13.482942 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.482949 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.482955 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.482962 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.482968 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.482974 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.482981 | orchestrator | 2025-05-23 00:56:13.482987 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.482994 | orchestrator | Friday 23 May 2025 00:45:03 +0000 (0:00:00.856) 0:01:49.086 ************ 2025-05-23 00:56:13.483000 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483007 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483013 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483020 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483026 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483032 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483039 | orchestrator | 2025-05-23 00:56:13.483046 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.483052 | orchestrator | Friday 23 May 2025 00:45:04 +0000 (0:00:00.638) 0:01:49.725 ************ 2025-05-23 00:56:13.483059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.483065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.483072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.483078 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483085 | orchestrator | 2025-05-23 00:56:13.483092 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.483098 | orchestrator | Friday 23 May 2025 00:45:04 +0000 (0:00:00.685) 0:01:50.410 ************ 2025-05-23 00:56:13.483105 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.483111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.483118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.483124 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483131 | orchestrator | 2025-05-23 00:56:13.483137 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-05-23 00:56:13.483144 | orchestrator | Friday 23 May 2025 00:45:05 +0000 (0:00:00.435) 0:01:50.846 ************ 2025-05-23 00:56:13.483151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.483157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.483164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.483170 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483177 | orchestrator | 2025-05-23 00:56:13.483183 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.483190 | orchestrator | Friday 23 May 2025 00:45:05 +0000 (0:00:00.542) 0:01:51.388 ************ 2025-05-23 00:56:13.483196 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483207 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483213 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483220 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483226 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483233 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483239 | orchestrator | 2025-05-23 00:56:13.483245 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.483252 | orchestrator | Friday 23 May 2025 00:45:06 +0000 (0:00:00.636) 0:01:52.024 ************ 2025-05-23 00:56:13.483259 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.483265 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.483272 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483278 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.483284 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483291 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.483297 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483304 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483310 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.483317 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483323 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.483330 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483336 | orchestrator | 2025-05-23 00:56:13.483343 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.483349 | orchestrator | Friday 23 May 2025 00:45:07 +0000 (0:00:01.036) 0:01:53.061 ************ 2025-05-23 00:56:13.483355 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483362 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483368 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483375 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483381 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483403 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483410 | orchestrator | 2025-05-23 00:56:13.483468 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.483479 | orchestrator | Friday 23 May 2025 00:45:08 +0000 (0:00:00.612) 0:01:53.673 ************ 2025-05-23 00:56:13.483487 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483495 | orchestrator | skipping: [testbed-node-1] 
2025-05-23 00:56:13.483502 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483509 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483516 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483524 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483531 | orchestrator | 2025-05-23 00:56:13.483538 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.483549 | orchestrator | Friday 23 May 2025 00:45:09 +0000 (0:00:00.822) 0:01:54.496 ************ 2025-05-23 00:56:13.483561 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.483573 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483585 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.483597 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483614 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.483626 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483636 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.483648 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483660 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.483672 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483685 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.483697 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483705 | orchestrator | 2025-05-23 00:56:13.483712 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.483718 | orchestrator | Friday 23 May 2025 00:45:09 +0000 (0:00:00.761) 0:01:55.257 ************ 2025-05-23 00:56:13.483731 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483738 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483761 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483769 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.483776 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483783 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.483789 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483796 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.483802 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.483809 | orchestrator | 2025-05-23 00:56:13.483815 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.483822 | orchestrator | Friday 23 May 2025 00:45:10 +0000 (0:00:00.821) 0:01:56.079 ************ 2025-05-23 00:56:13.483828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.483835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.483841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.483848 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.483870 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:56:13.483877 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-05-23 00:56:13.483884 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:56:13.483890 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.483897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:56:13.483903 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:56:13.483910 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:56:13.483917 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.483923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.483930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.483936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.483942 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.483949 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.483956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.483962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.483969 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.483975 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.483982 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.483988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.483994 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484001 | orchestrator | 2025-05-23 00:56:13.484008 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.484014 | orchestrator | Friday 23 May 2025 00:45:12 +0000 (0:00:01.558) 0:01:57.637 ************ 2025-05-23 00:56:13.484021 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484027 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484034 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484040 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484047 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484053 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484060 | orchestrator | 2025-05-23 00:56:13.484066 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.484078 | orchestrator | Friday 23 May 2025 00:45:13 +0000 (0:00:01.208) 0:01:58.845 ************ 2025-05-23 00:56:13.484084 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484091 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484155 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484165 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.484172 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484178 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.484185 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484191 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.484198 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484204 | orchestrator | 2025-05-23 00:56:13.484211 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.484217 | orchestrator | Friday 23 May 
2025 00:45:14 +0000 (0:00:01.260) 0:02:00.106 ************ 2025-05-23 00:56:13.484224 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484230 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484237 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484243 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484250 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484256 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484263 | orchestrator | 2025-05-23 00:56:13.484273 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.484280 | orchestrator | Friday 23 May 2025 00:45:15 +0000 (0:00:01.245) 0:02:01.352 ************ 2025-05-23 00:56:13.484286 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484293 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484299 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484306 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484312 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484319 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484325 | orchestrator | 2025-05-23 00:56:13.484332 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-23 00:56:13.484338 | orchestrator | Friday 23 May 2025 00:45:17 +0000 (0:00:01.224) 0:02:02.577 ************ 2025-05-23 00:56:13.484345 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.484351 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.484358 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.484364 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.484371 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.484377 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.484428 | orchestrator | 2025-05-23 00:56:13.484437 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-23 00:56:13.484444 | orchestrator | Friday 23 May 2025 00:45:18 +0000 (0:00:01.545) 0:02:04.123 ************ 2025-05-23 00:56:13.484450 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.484457 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.484463 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.484470 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.484476 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.484483 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.484489 | orchestrator | 2025-05-23 00:56:13.484496 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-23 00:56:13.484502 | orchestrator | Friday 23 May 2025 00:45:20 +0000 (0:00:02.281) 0:02:06.404 ************ 2025-05-23 00:56:13.484509 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.484517 | orchestrator | 2025-05-23 00:56:13.484523 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-23 00:56:13.484530 | orchestrator | Friday 23 May 2025 00:45:22 +0000 (0:00:01.238) 0:02:07.643 ************ 2025-05-23 00:56:13.484537 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484549 | orchestrator | skipping: [testbed-node-1] 2025-05-23 
00:56:13.484556 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484562 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484569 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484575 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484582 | orchestrator | 2025-05-23 00:56:13.484588 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-23 00:56:13.484595 | orchestrator | Friday 23 May 2025 00:45:22 +0000 (0:00:00.692) 0:02:08.335 ************ 2025-05-23 00:56:13.484602 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484608 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484615 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484621 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484628 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484634 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484641 | orchestrator | 2025-05-23 00:56:13.484647 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-23 00:56:13.484654 | orchestrator | Friday 23 May 2025 00:45:23 +0000 (0:00:00.910) 0:02:09.246 ************ 2025-05-23 00:56:13.484661 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484667 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484674 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484681 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484687 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484694 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484700 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484707 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484713 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-23 00:56:13.484720 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484774 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484783 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-23 00:56:13.484790 | orchestrator | 2025-05-23 00:56:13.484797 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-23 00:56:13.484804 | orchestrator | Friday 23 May 2025 00:45:25 +0000 (0:00:01.730) 0:02:10.977 ************ 2025-05-23 00:56:13.484810 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.484817 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.484823 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.484830 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.484836 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.484843 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.484849 | orchestrator | 2025-05-23 00:56:13.484856 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-05-23 00:56:13.484862 | orchestrator | Friday 23 May 2025 00:45:26 +0000 (0:00:01.137) 0:02:12.114 ************ 2025-05-23 00:56:13.484869 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484876 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484882 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484893 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484900 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484906 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484913 | orchestrator | 2025-05-23 00:56:13.484919 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-23 00:56:13.484931 | orchestrator | Friday 23 May 2025 00:45:27 +0000 (0:00:01.050) 0:02:13.164 ************ 2025-05-23 00:56:13.484937 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.484944 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.484950 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.484957 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.484963 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.484969 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.484975 | orchestrator | 2025-05-23 00:56:13.484981 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-23 00:56:13.484987 | orchestrator | Friday 23 May 2025 00:45:28 +0000 (0:00:00.635) 0:02:13.800 ************ 2025-05-23 00:56:13.484994 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.485000 | orchestrator | 2025-05-23 00:56:13.485006 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-23 00:56:13.485012 | orchestrator | Friday 23 May 2025 00:45:29 +0000 (0:00:01.239) 0:02:15.040 ************ 2025-05-23 00:56:13.485018 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.485024 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.485030 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.485036 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.485042 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.485048 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.485064 | orchestrator | 2025-05-23 00:56:13.485070 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-23 00:56:13.485076 | orchestrator | Friday 23 May 2025 00:46:18 +0000 (0:00:48.591) 0:03:03.631 ************ 2025-05-23 00:56:13.485082 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485088 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485094 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-23 00:56:13.485100 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485107 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485113 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485119 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
2025-05-23 00:56:13.485125 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485131 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485137 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485143 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-23 00:56:13.485149 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485155 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485161 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485167 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-23 00:56:13.485173 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485179 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485185 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485191 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-23 00:56:13.485197 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485203 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-23 00:56:13.485209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-23 00:56:13.485232 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-23 00:56:13.485239 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485245 | orchestrator | 2025-05-23 00:56:13.485251 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-23 00:56:13.485257 | orchestrator | Friday 23 May 2025 00:46:19 +0000 (0:00:01.051) 0:03:04.683 ************ 2025-05-23 00:56:13.485263 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485311 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485320 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485326 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485332 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485338 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485345 | orchestrator | 2025-05-23 00:56:13.485351 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-23 00:56:13.485357 | orchestrator | Friday 23 May 2025 00:46:19 +0000 (0:00:00.779) 0:03:05.463 ************ 2025-05-23 00:56:13.485363 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485369 | orchestrator | 2025-05-23 00:56:13.485375 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-23 00:56:13.485381 | orchestrator | Friday 23 May 2025 00:46:20 +0000 (0:00:00.155) 0:03:05.618 ************ 2025-05-23 00:56:13.485404 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485410 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485416 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485422 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485428 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485434 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485440 | 
orchestrator | 2025-05-23 00:56:13.485450 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-23 00:56:13.485457 | orchestrator | Friday 23 May 2025 00:46:21 +0000 (0:00:01.077) 0:03:06.695 ************ 2025-05-23 00:56:13.485463 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485469 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485475 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485480 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485486 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485492 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485498 | orchestrator | 2025-05-23 00:56:13.485504 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-23 00:56:13.485510 | orchestrator | Friday 23 May 2025 00:46:22 +0000 (0:00:00.831) 0:03:07.527 ************ 2025-05-23 00:56:13.485516 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485523 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485528 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485534 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485540 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485546 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485552 | orchestrator | 2025-05-23 00:56:13.485558 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-23 00:56:13.485564 | orchestrator | Friday 23 May 2025 00:46:23 +0000 (0:00:01.300) 0:03:08.828 ************ 2025-05-23 00:56:13.485570 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.485576 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.485582 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.485593 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.485603 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.485614 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.485625 | orchestrator | 2025-05-23 00:56:13.485635 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-23 00:56:13.485644 | orchestrator | Friday 23 May 2025 00:46:25 +0000 (0:00:01.778) 0:03:10.606 ************ 2025-05-23 00:56:13.485654 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.485663 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.485678 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.485687 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.485696 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.485706 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.485716 | orchestrator | 2025-05-23 00:56:13.485726 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-23 00:56:13.485737 | orchestrator | Friday 23 May 2025 00:46:25 +0000 (0:00:00.737) 0:03:11.344 ************ 2025-05-23 00:56:13.485746 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.485753 | orchestrator | 2025-05-23 00:56:13.485759 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-23 00:56:13.485765 | orchestrator | Friday 23 May 2025 00:46:26 +0000 (0:00:01.101) 0:03:12.445 ************ 2025-05-23 
00:56:13.485771 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485777 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485784 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485790 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485796 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485801 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485807 | orchestrator | 2025-05-23 00:56:13.485813 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-23 00:56:13.485820 | orchestrator | Friday 23 May 2025 00:46:27 +0000 (0:00:00.786) 0:03:13.231 ************ 2025-05-23 00:56:13.485826 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485832 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485838 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485844 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485850 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485856 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485862 | orchestrator | 2025-05-23 00:56:13.485868 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-23 00:56:13.485874 | orchestrator | Friday 23 May 2025 00:46:28 +0000 (0:00:00.847) 0:03:14.078 ************ 2025-05-23 00:56:13.485880 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485886 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485892 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485898 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.485904 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.485910 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.485916 | orchestrator | 2025-05-23 00:56:13.485922 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-23 00:56:13.485928 | orchestrator | Friday 23 May 2025 00:46:29 +0000 (0:00:00.634) 0:03:14.713 ************ 2025-05-23 00:56:13.485934 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.485940 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.485947 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.485954 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.486013 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.486043 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.486050 | orchestrator | 2025-05-23 00:56:13.486057 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-23 00:56:13.486064 | orchestrator | Friday 23 May 2025 00:46:30 +0000 (0:00:01.060) 0:03:15.773 ************ 2025-05-23 00:56:13.486071 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.486078 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.486085 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.486091 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.486098 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.486105 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.486112 | orchestrator | 2025-05-23 00:56:13.486119 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-23 00:56:13.486132 | orchestrator | Friday 23 May 2025 00:46:31 +0000 (0:00:00.859) 0:03:16.633 ************ 
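
The "get ceph version" / "set_fact ceph_version" / release.yml sequence above derives the release codename from the pulled image: "ceph --version" is executed in the container, the numeric version is split out of its output, and release.yml then tries one set_fact per release name, so every candidate except the one matching the detected major version (17.x, i.e. quincy for the 17.2.7 image used here) is skipped. A hedged sketch of that logic (illustrative only, not the ceph-ansible source; the docker invocation and variable names are assumptions):

- name: Get ceph version from the container image   # illustrative sketch
  ansible.builtin.command: >-
    docker run --rm --entrypoint /usr/bin/ceph
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} --version
  register: ceph_version_out
  changed_when: false

- name: Set ceph_version from the command output    # output like "ceph version 17.2.7 (...) quincy (stable)"
  ansible.builtin.set_fact:
    ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"

- name: Set ceph_release to quincy for Ceph 17.x
  ansible.builtin.set_fact:
    ceph_release: quincy
  when:
    - ceph_version is version('17.0.0', '>=')
    - ceph_version is version('18.0.0', '<')
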
2025-05-23 00:56:13.486140 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.486147 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.486154 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.486160 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.486171 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.486177 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.486183 | orchestrator | 2025-05-23 00:56:13.486189 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-23 00:56:13.486195 | orchestrator | Friday 23 May 2025 00:46:32 +0000 (0:00:01.038) 0:03:17.671 ************ 2025-05-23 00:56:13.486201 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.486207 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.486213 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.486220 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.486226 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.486232 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.486238 | orchestrator | 2025-05-23 00:56:13.486244 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-23 00:56:13.486250 | orchestrator | Friday 23 May 2025 00:46:32 +0000 (0:00:00.692) 0:03:18.364 ************ 2025-05-23 00:56:13.486256 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.486262 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.486268 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.486274 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.486280 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.486286 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.486292 | orchestrator | 2025-05-23 00:56:13.486298 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.486304 | orchestrator | Friday 23 May 2025 00:46:34 +0000 (0:00:01.319) 0:03:19.683 ************ 2025-05-23 00:56:13.486311 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.486317 | orchestrator | 2025-05-23 00:56:13.486323 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-23 00:56:13.486339 | orchestrator | Friday 23 May 2025 00:46:35 +0000 (0:00:01.315) 0:03:20.999 ************ 2025-05-23 00:56:13.486346 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-23 00:56:13.486352 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-23 00:56:13.486358 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486364 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486370 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-23 00:56:13.486376 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-23 00:56:13.486395 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486407 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486414 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-23 
00:56:13.486420 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486426 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-23 00:56:13.486432 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486444 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486450 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486457 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486463 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-23 00:56:13.486476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486482 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486488 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486494 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486520 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-23 00:56:13.486526 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486532 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486538 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486544 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486550 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-23 00:56:13.486562 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486628 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486634 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486640 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486646 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486652 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-23 00:56:13.486658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486664 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486670 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486676 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486692 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-23 00:56:13.486699 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486711 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486717 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486723 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-23 00:56:13.486729 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486735 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486741 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486747 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486753 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486759 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-23 00:56:13.486765 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486783 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486790 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486800 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486806 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486813 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-23 00:56:13.486818 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486824 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486831 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486837 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486843 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-23 00:56:13.486849 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486855 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486861 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486867 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486879 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-23 00:56:13.486885 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-23 00:56:13.486891 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486897 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-23 00:56:13.486904 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486910 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486916 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-23 00:56:13.486922 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-23 00:56:13.486928 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-23 00:56:13.486934 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-23 00:56:13.486941 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-23 00:56:13.486947 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-23 00:56:13.486953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-23 00:56:13.486959 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-23 00:56:13.486965 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-23 00:56:13.486971 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-23 00:56:13.486977 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-23 00:56:13.487025 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-23 00:56:13.487034 | orchestrator | 2025-05-23 00:56:13.487040 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.487050 | orchestrator | Friday 23 May 2025 00:46:41 +0000 (0:00:06.303) 0:03:27.303 ************ 2025-05-23 00:56:13.487060 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487072 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487086 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487096 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.487106 | orchestrator | 2025-05-23 00:56:13.487117 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-23 00:56:13.487127 | orchestrator | Friday 23 May 2025 00:46:42 +0000 (0:00:01.181) 0:03:28.485 ************ 2025-05-23 00:56:13.487143 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487160 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487167 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487173 | orchestrator | 2025-05-23 00:56:13.487179 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-23 00:56:13.487185 | orchestrator | Friday 23 May 2025 00:46:44 +0000 (0:00:01.081) 0:03:29.566 ************ 2025-05-23 00:56:13.487191 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487197 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487203 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.487209 | orchestrator | 2025-05-23 00:56:13.487215 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-05-23 00:56:13.487221 | orchestrator | Friday 23 May 2025 00:46:45 +0000 (0:00:01.286) 0:03:30.853 ************ 2025-05-23 00:56:13.487227 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487233 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487239 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487246 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.487252 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.487258 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.487274 | orchestrator | 2025-05-23 00:56:13.487281 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.487287 | orchestrator | Friday 23 May 2025 00:46:46 +0000 (0:00:00.842) 0:03:31.696 ************ 2025-05-23 00:56:13.487293 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487299 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487305 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487311 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.487317 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.487323 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.487329 | orchestrator | 2025-05-23 00:56:13.487335 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.487341 | orchestrator | Friday 23 May 2025 00:46:46 +0000 (0:00:00.779) 0:03:32.475 ************ 2025-05-23 00:56:13.487347 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487353 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487359 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487365 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487371 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487377 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487399 | orchestrator | 2025-05-23 00:56:13.487406 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.487412 | orchestrator | Friday 23 May 2025 00:46:47 +0000 (0:00:00.869) 0:03:33.344 ************ 2025-05-23 00:56:13.487418 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487437 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487443 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487449 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487455 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487461 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487467 | orchestrator | 2025-05-23 00:56:13.487474 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.487480 | orchestrator | Friday 23 May 2025 00:46:48 +0000 (0:00:00.712) 0:03:34.057 ************ 2025-05-23 00:56:13.487486 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487492 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487498 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487508 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487514 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487520 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487526 | orchestrator | 2025-05-23 00:56:13.487532 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see 
how many osds are to be created] *** 2025-05-23 00:56:13.487539 | orchestrator | Friday 23 May 2025 00:46:49 +0000 (0:00:00.919) 0:03:34.977 ************ 2025-05-23 00:56:13.487545 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487551 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487557 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487563 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487569 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487575 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487581 | orchestrator | 2025-05-23 00:56:13.487587 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.487645 | orchestrator | Friday 23 May 2025 00:46:50 +0000 (0:00:00.699) 0:03:35.676 ************ 2025-05-23 00:56:13.487657 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487668 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487679 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487690 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487697 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487703 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487709 | orchestrator | 2025-05-23 00:56:13.487715 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.487721 | orchestrator | Friday 23 May 2025 00:46:51 +0000 (0:00:01.061) 0:03:36.738 ************ 2025-05-23 00:56:13.487727 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487733 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487740 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487745 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487751 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487757 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.487763 | orchestrator | 2025-05-23 00:56:13.487774 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.487780 | orchestrator | Friday 23 May 2025 00:46:51 +0000 (0:00:00.685) 0:03:37.423 ************ 2025-05-23 00:56:13.487786 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487792 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487798 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487804 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.487810 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.487816 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.487822 | orchestrator | 2025-05-23 00:56:13.487828 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.487834 | orchestrator | Friday 23 May 2025 00:46:54 +0000 (0:00:02.495) 0:03:39.919 ************ 2025-05-23 00:56:13.487841 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487847 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487853 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487859 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.487865 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.487871 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.487877 | orchestrator | 2025-05-23 00:56:13.487883 | orchestrator | 
TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.487889 | orchestrator | Friday 23 May 2025 00:46:55 +0000 (0:00:00.885) 0:03:40.805 ************ 2025-05-23 00:56:13.487895 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.487901 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.487907 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.487913 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.487924 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.487930 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.487936 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.487942 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.487948 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.487954 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.487960 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.487966 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.487972 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.487978 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.487984 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.487990 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.487996 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.488002 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488008 | orchestrator | 2025-05-23 00:56:13.488014 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.488020 | orchestrator | Friday 23 May 2025 00:46:56 +0000 (0:00:01.176) 0:03:41.981 ************ 2025-05-23 00:56:13.488026 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-23 00:56:13.488033 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-23 00:56:13.488039 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488045 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-23 00:56:13.488051 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-23 00:56:13.488057 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488063 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-23 00:56:13.488069 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-23 00:56:13.488075 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488081 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-23 00:56:13.488087 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-23 00:56:13.488093 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-23 00:56:13.488099 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-23 00:56:13.488105 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-23 00:56:13.488111 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-23 00:56:13.488117 | orchestrator | 2025-05-23 00:56:13.488123 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.488129 | orchestrator | Friday 23 May 2025 00:46:57 +0000 (0:00:00.607) 
0:03:42.589 ************ 2025-05-23 00:56:13.488135 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488141 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488147 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488153 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.488159 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.488166 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.488172 | orchestrator | 2025-05-23 00:56:13.488178 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.488184 | orchestrator | Friday 23 May 2025 00:46:57 +0000 (0:00:00.715) 0:03:43.305 ************ 2025-05-23 00:56:13.488190 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488249 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488266 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488275 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.488285 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.488295 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488304 | orchestrator | 2025-05-23 00:56:13.488313 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.488331 | orchestrator | Friday 23 May 2025 00:46:58 +0000 (0:00:00.582) 0:03:43.887 ************ 2025-05-23 00:56:13.488341 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488352 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488362 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488373 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.488422 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.488431 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488437 | orchestrator | 2025-05-23 00:56:13.488443 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.488465 | orchestrator | Friday 23 May 2025 00:46:59 +0000 (0:00:00.791) 0:03:44.678 ************ 2025-05-23 00:56:13.488472 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488478 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488484 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488490 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.488496 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.488502 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488508 | orchestrator | 2025-05-23 00:56:13.488514 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.488520 | orchestrator | Friday 23 May 2025 00:46:59 +0000 (0:00:00.590) 0:03:45.268 ************ 2025-05-23 00:56:13.488526 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488532 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488538 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488544 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.488550 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.488556 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488562 | orchestrator | 2025-05-23 00:56:13.488568 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.488574 | orchestrator | 
Friday 23 May 2025 00:47:00 +0000 (0:00:00.787) 0:03:46.056 ************ 2025-05-23 00:56:13.488581 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488587 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488593 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488599 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.488618 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.488625 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.488631 | orchestrator | 2025-05-23 00:56:13.488637 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.488643 | orchestrator | Friday 23 May 2025 00:47:01 +0000 (0:00:00.677) 0:03:46.734 ************ 2025-05-23 00:56:13.488649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.488655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.488661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.488667 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488674 | orchestrator | 2025-05-23 00:56:13.488680 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.488686 | orchestrator | Friday 23 May 2025 00:47:01 +0000 (0:00:00.517) 0:03:47.251 ************ 2025-05-23 00:56:13.488692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.488698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.488704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.488710 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488716 | orchestrator | 2025-05-23 00:56:13.488722 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.488728 | orchestrator | Friday 23 May 2025 00:47:02 +0000 (0:00:00.513) 0:03:47.764 ************ 2025-05-23 00:56:13.488734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.488740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.488753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.488759 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488765 | orchestrator | 2025-05-23 00:56:13.488771 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.488777 | orchestrator | Friday 23 May 2025 00:47:02 +0000 (0:00:00.496) 0:03:48.260 ************ 2025-05-23 00:56:13.488783 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488789 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488795 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488801 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.488806 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.488811 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.488817 | orchestrator | 2025-05-23 00:56:13.488822 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.488827 | orchestrator | Friday 23 May 2025 00:47:03 +0000 (0:00:00.581) 0:03:48.842 ************ 2025-05-23 00:56:13.488832 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.488838 | orchestrator | skipping: 
[testbed-node-0] 2025-05-23 00:56:13.488843 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.488848 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488853 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.488859 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488864 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-23 00:56:13.488869 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-23 00:56:13.488875 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-23 00:56:13.488881 | orchestrator | 2025-05-23 00:56:13.488887 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.488894 | orchestrator | Friday 23 May 2025 00:47:04 +0000 (0:00:01.047) 0:03:49.889 ************ 2025-05-23 00:56:13.488900 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.488959 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.488967 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.488973 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.488979 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.488985 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.488991 | orchestrator | 2025-05-23 00:56:13.488997 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.489003 | orchestrator | Friday 23 May 2025 00:47:05 +0000 (0:00:00.704) 0:03:50.594 ************ 2025-05-23 00:56:13.489009 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489015 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489021 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489027 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489033 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489039 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489045 | orchestrator | 2025-05-23 00:56:13.489051 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.489057 | orchestrator | Friday 23 May 2025 00:47:06 +0000 (0:00:01.186) 0:03:51.781 ************ 2025-05-23 00:56:13.489067 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.489073 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489079 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.489086 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489091 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.489098 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489104 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.489110 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489115 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.489122 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489127 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.489134 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489143 | orchestrator | 2025-05-23 00:56:13.489149 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.489155 | orchestrator | Friday 23 May 2025 00:47:07 +0000 (0:00:01.181) 0:03:52.962 ************ 2025-05-23 00:56:13.489162 | orchestrator | skipping: [testbed-node-0] 
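
The rgw_instances fact assembled here is what the earlier "create rados gateway instance directories" and "generate environment file" tasks looped over: one dict per instance with instance_name, radosgw_address and radosgw_frontend_port (rgw0 on 192.168.16.13-15, port 8081, in this run), and those address/port pairs later feed the rgw frontend settings in the generated ceph.conf. An illustrative sketch of consuming that fact (not the actual ceph-ansible tasks; the directory layout and file contents are assumptions):

- name: Create one directory per radosgw instance   # illustrative sketch
  ansible.builtin.file:
    path: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: directory
    mode: "0755"
  loop: "{{ rgw_instances }}"

- name: Write a systemd EnvironmentFile for each instance
  ansible.builtin.copy:
    dest: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
    content: |
      INST_NAME={{ item.instance_name }}
    mode: "0644"
  loop: "{{ rgw_instances }}"
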
2025-05-23 00:56:13.489168 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489174 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489180 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.489186 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489192 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.489198 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489205 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.489211 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489217 | orchestrator | 2025-05-23 00:56:13.489223 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.489229 | orchestrator | Friday 23 May 2025 00:47:08 +0000 (0:00:00.858) 0:03:53.820 ************ 2025-05-23 00:56:13.489235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.489241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.489246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.489251 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:56:13.489257 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-23 00:56:13.489262 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:56:13.489267 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:56:13.489278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:56:13.489283 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:56:13.489288 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489294 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.489304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.489309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.489315 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489320 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.489325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.489330 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.489336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.489341 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.489346 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489351 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.489357 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489362 | orchestrator | 2025-05-23 00:56:13.489367 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.489373 | orchestrator | 
Friday 23 May 2025 00:47:09 +0000 (0:00:01.447) 0:03:55.268 ************ 2025-05-23 00:56:13.489378 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.489396 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.489402 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.489407 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.489413 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.489421 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.489427 | orchestrator | 2025-05-23 00:56:13.489470 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.489478 | orchestrator | Friday 23 May 2025 00:47:14 +0000 (0:00:04.442) 0:03:59.711 ************ 2025-05-23 00:56:13.489483 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.489489 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.489494 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.489499 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.489504 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.489509 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.489515 | orchestrator | 2025-05-23 00:56:13.489520 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-23 00:56:13.489525 | orchestrator | Friday 23 May 2025 00:47:15 +0000 (0:00:01.049) 0:04:00.760 ************ 2025-05-23 00:56:13.489531 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489536 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489541 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.489552 | orchestrator | 2025-05-23 00:56:13.489560 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-23 00:56:13.489565 | orchestrator | Friday 23 May 2025 00:47:16 +0000 (0:00:01.053) 0:04:01.814 ************ 2025-05-23 00:56:13.489571 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.489576 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.489581 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.489587 | orchestrator | 2025-05-23 00:56:13.489592 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-23 00:56:13.489598 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.489603 | orchestrator | 2025-05-23 00:56:13.489608 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-23 00:56:13.489614 | orchestrator | Friday 23 May 2025 00:47:17 +0000 (0:00:01.224) 0:04:03.038 ************ 2025-05-23 00:56:13.489619 | orchestrator | 2025-05-23 00:56:13.489624 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-23 00:56:13.489630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.489635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.489640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.489645 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489651 | orchestrator | 2025-05-23 00:56:13.489657 | 
orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-23 00:56:13.489666 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.489675 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.489683 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.489691 | orchestrator | 2025-05-23 00:56:13.489711 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-23 00:56:13.489719 | orchestrator | Friday 23 May 2025 00:47:18 +0000 (0:00:01.354) 0:04:04.392 ************ 2025-05-23 00:56:13.489727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.489736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.489744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.489752 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489761 | orchestrator | 2025-05-23 00:56:13.489769 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-23 00:56:13.489778 | orchestrator | Friday 23 May 2025 00:47:19 +0000 (0:00:01.015) 0:04:05.408 ************ 2025-05-23 00:56:13.489786 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.489792 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.489806 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.489811 | orchestrator | 2025-05-23 00:56:13.489816 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-23 00:56:13.489822 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489827 | orchestrator | 2025-05-23 00:56:13.489832 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-23 00:56:13.489837 | orchestrator | Friday 23 May 2025 00:47:20 +0000 (0:00:00.898) 0:04:06.307 ************ 2025-05-23 00:56:13.489843 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489848 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489853 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489871 | orchestrator | 2025-05-23 00:56:13.489876 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-23 00:56:13.489882 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489887 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.489892 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.489898 | orchestrator | 2025-05-23 00:56:13.489903 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-23 00:56:13.489908 | orchestrator | Friday 23 May 2025 00:47:21 +0000 (0:00:00.989) 0:04:07.296 ************ 2025-05-23 00:56:13.489914 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489919 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489924 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.489930 | orchestrator | 2025-05-23 00:56:13.489935 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-23 00:56:13.489940 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.489946 | orchestrator | 2025-05-23 00:56:13.489951 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-23 00:56:13.489956 | orchestrator | Friday 23 May 2025 00:47:22 +0000 
(0:00:01.156) 0:04:08.453 ************ 2025-05-23 00:56:13.489962 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.489967 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.489972 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.490047 | orchestrator | 2025-05-23 00:56:13.490057 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-23 00:56:13.490062 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490067 | orchestrator | 2025-05-23 00:56:13.490073 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-23 00:56:13.490078 | orchestrator | Friday 23 May 2025 00:47:23 +0000 (0:00:00.879) 0:04:09.332 ************ 2025-05-23 00:56:13.490083 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490089 | orchestrator | 2025-05-23 00:56:13.490153 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-23 00:56:13.490166 | orchestrator | Friday 23 May 2025 00:47:23 +0000 (0:00:00.137) 0:04:09.470 ************ 2025-05-23 00:56:13.490176 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.490190 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.490201 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.490211 | orchestrator | 2025-05-23 00:56:13.490220 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-23 00:56:13.490230 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490238 | orchestrator | 2025-05-23 00:56:13.490243 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-23 00:56:13.490249 | orchestrator | Friday 23 May 2025 00:47:25 +0000 (0:00:01.301) 0:04:10.771 ************ 2025-05-23 00:56:13.490254 | orchestrator | 2025-05-23 00:56:13.490260 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-23 00:56:13.490265 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.490281 | orchestrator | 2025-05-23 00:56:13.490287 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-23 00:56:13.490298 | orchestrator | Friday 23 May 2025 00:47:26 +0000 (0:00:00.838) 0:04:11.610 ************ 2025-05-23 00:56:13.490303 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.490308 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.490316 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.490326 | orchestrator | 2025-05-23 00:56:13.490332 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-23 00:56:13.490337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.490343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.490348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.490353 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490358 | orchestrator | 2025-05-23 00:56:13.490364 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-23 00:56:13.490369 | orchestrator | Friday 23 May 2025 00:47:27 +0000 (0:00:01.173) 0:04:12.783 
************ 2025-05-23 00:56:13.490374 | orchestrator | 2025-05-23 00:56:13.490380 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-23 00:56:13.490403 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490408 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.490413 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.490419 | orchestrator | 2025-05-23 00:56:13.490424 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-23 00:56:13.490429 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.490435 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.490440 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.490445 | orchestrator | 2025-05-23 00:56:13.490451 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-23 00:56:13.490456 | orchestrator | Friday 23 May 2025 00:47:28 +0000 (0:00:01.408) 0:04:14.192 ************ 2025-05-23 00:56:13.490461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.490467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.490472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.490477 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.490483 | orchestrator | 2025-05-23 00:56:13.490488 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-23 00:56:13.490493 | orchestrator | Friday 23 May 2025 00:47:29 +0000 (0:00:00.917) 0:04:15.109 ************ 2025-05-23 00:56:13.490499 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.490504 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.490509 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.490515 | orchestrator | 2025-05-23 00:56:13.490520 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-23 00:56:13.490525 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490531 | orchestrator | 2025-05-23 00:56:13.490536 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-23 00:56:13.490541 | orchestrator | Friday 23 May 2025 00:47:31 +0000 (0:00:01.941) 0:04:17.052 ************ 2025-05-23 00:56:13.490547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.490552 | orchestrator | 2025-05-23 00:56:13.490557 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-23 00:56:13.490563 | orchestrator | Friday 23 May 2025 00:47:32 +0000 (0:00:01.041) 0:04:18.093 ************ 2025-05-23 00:56:13.490568 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.490573 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.490579 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.490584 | orchestrator | 2025-05-23 00:56:13.490589 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-23 00:56:13.490595 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.490600 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.490606 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.490615 | orchestrator | 2025-05-23 00:56:13.490620 | orchestrator | RUNNING HANDLER 
[ceph-handler : copy mds restart script] *********************** 2025-05-23 00:56:13.490626 | orchestrator | Friday 23 May 2025 00:47:33 +0000 (0:00:01.225) 0:04:19.318 ************ 2025-05-23 00:56:13.490631 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.490636 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.490642 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.490647 | orchestrator | 2025-05-23 00:56:13.490653 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.490662 | orchestrator | Friday 23 May 2025 00:47:35 +0000 (0:00:01.187) 0:04:20.506 ************ 2025-05-23 00:56:13.490676 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.490686 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.490696 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.490706 | orchestrator | 2025-05-23 00:56:13.490712 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-23 00:56:13.490767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.490778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.490792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.490802 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.490812 | orchestrator | 2025-05-23 00:56:13.490820 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-23 00:56:13.490825 | orchestrator | Friday 23 May 2025 00:47:36 +0000 (0:00:01.645) 0:04:22.152 ************ 2025-05-23 00:56:13.490831 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.490836 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.490842 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.490847 | orchestrator | 2025-05-23 00:56:13.490853 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-23 00:56:13.490860 | orchestrator | Friday 23 May 2025 00:47:37 +0000 (0:00:00.685) 0:04:22.837 ************ 2025-05-23 00:56:13.490872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.490883 | orchestrator | 2025-05-23 00:56:13.490897 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-23 00:56:13.490905 | orchestrator | Friday 23 May 2025 00:47:37 +0000 (0:00:00.463) 0:04:23.301 ************ 2025-05-23 00:56:13.490913 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.490921 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.490929 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.490938 | orchestrator | 2025-05-23 00:56:13.490947 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-23 00:56:13.490956 | orchestrator | Friday 23 May 2025 00:47:38 +0000 (0:00:00.436) 0:04:23.737 ************ 2025-05-23 00:56:13.490977 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.490983 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.490988 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.490994 | orchestrator | 2025-05-23 00:56:13.490999 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-23 00:56:13.491004 | orchestrator | Friday 23 May 2025 
00:47:39 +0000 (0:00:01.100) 0:04:24.838 ************ 2025-05-23 00:56:13.491010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.491015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.491021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.491026 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.491031 | orchestrator | 2025-05-23 00:56:13.491037 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-23 00:56:13.491042 | orchestrator | Friday 23 May 2025 00:47:39 +0000 (0:00:00.528) 0:04:25.367 ************ 2025-05-23 00:56:13.491047 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.491053 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.491064 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.491069 | orchestrator | 2025-05-23 00:56:13.491075 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-23 00:56:13.491080 | orchestrator | Friday 23 May 2025 00:47:40 +0000 (0:00:00.285) 0:04:25.652 ************ 2025-05-23 00:56:13.491099 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.491105 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.491110 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.491115 | orchestrator | 2025-05-23 00:56:13.491121 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-23 00:56:13.491126 | orchestrator | Friday 23 May 2025 00:47:40 +0000 (0:00:00.255) 0:04:25.908 ************ 2025-05-23 00:56:13.491131 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.491137 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.491142 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.491147 | orchestrator | 2025-05-23 00:56:13.491153 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-23 00:56:13.491158 | orchestrator | Friday 23 May 2025 00:47:40 +0000 (0:00:00.378) 0:04:26.287 ************ 2025-05-23 00:56:13.491163 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.491168 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.491174 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.491179 | orchestrator | 2025-05-23 00:56:13.491184 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.491190 | orchestrator | Friday 23 May 2025 00:47:41 +0000 (0:00:00.267) 0:04:26.554 ************ 2025-05-23 00:56:13.491195 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.491200 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.491205 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.491211 | orchestrator | 2025-05-23 00:56:13.491216 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-23 00:56:13.491221 | orchestrator | 2025-05-23 00:56:13.491227 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.491232 | orchestrator | Friday 23 May 2025 00:47:42 +0000 (0:00:01.932) 0:04:28.486 ************ 2025-05-23 00:56:13.491238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.491243 | orchestrator 
| 2025-05-23 00:56:13.491249 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-23 00:56:13.491254 | orchestrator | Friday 23 May 2025 00:47:43 +0000 (0:00:00.959) 0:04:29.446 ************ 2025-05-23 00:56:13.491259 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.491265 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.491270 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.491275 | orchestrator | 2025-05-23 00:56:13.491281 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.491286 | orchestrator | Friday 23 May 2025 00:47:44 +0000 (0:00:00.785) 0:04:30.232 ************ 2025-05-23 00:56:13.491291 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491297 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491302 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491307 | orchestrator | 2025-05-23 00:56:13.491312 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.491366 | orchestrator | Friday 23 May 2025 00:47:45 +0000 (0:00:00.463) 0:04:30.695 ************ 2025-05-23 00:56:13.491378 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491403 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491416 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491424 | orchestrator | 2025-05-23 00:56:13.491434 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.491441 | orchestrator | Friday 23 May 2025 00:47:45 +0000 (0:00:00.716) 0:04:31.412 ************ 2025-05-23 00:56:13.491446 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491451 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491462 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491468 | orchestrator | 2025-05-23 00:56:13.491473 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.491478 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:00.339) 0:04:31.751 ************ 2025-05-23 00:56:13.491484 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.491489 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.491494 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.491499 | orchestrator | 2025-05-23 00:56:13.491505 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.491514 | orchestrator | Friday 23 May 2025 00:47:46 +0000 (0:00:00.736) 0:04:32.488 ************ 2025-05-23 00:56:13.491519 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491525 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491530 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491535 | orchestrator | 2025-05-23 00:56:13.491541 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.491546 | orchestrator | Friday 23 May 2025 00:47:47 +0000 (0:00:00.391) 0:04:32.880 ************ 2025-05-23 00:56:13.491551 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491557 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491562 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491567 | orchestrator | 2025-05-23 00:56:13.491573 | orchestrator | TASK [ceph-handler : check for a tcmu-runner 
container] ************************ 2025-05-23 00:56:13.491578 | orchestrator | Friday 23 May 2025 00:47:48 +0000 (0:00:00.733) 0:04:33.613 ************ 2025-05-23 00:56:13.491583 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491588 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491596 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491606 | orchestrator | 2025-05-23 00:56:13.491611 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.491616 | orchestrator | Friday 23 May 2025 00:47:48 +0000 (0:00:00.388) 0:04:34.001 ************ 2025-05-23 00:56:13.491622 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491627 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491632 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491637 | orchestrator | 2025-05-23 00:56:13.491643 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.491687 | orchestrator | Friday 23 May 2025 00:47:48 +0000 (0:00:00.398) 0:04:34.400 ************ 2025-05-23 00:56:13.491694 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491699 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491704 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491710 | orchestrator | 2025-05-23 00:56:13.491715 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.491720 | orchestrator | Friday 23 May 2025 00:47:49 +0000 (0:00:00.383) 0:04:34.783 ************ 2025-05-23 00:56:13.491726 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.491731 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.491736 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.491742 | orchestrator | 2025-05-23 00:56:13.491747 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.491752 | orchestrator | Friday 23 May 2025 00:47:50 +0000 (0:00:01.242) 0:04:36.026 ************ 2025-05-23 00:56:13.491758 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491763 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491768 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491773 | orchestrator | 2025-05-23 00:56:13.491779 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.491784 | orchestrator | Friday 23 May 2025 00:47:50 +0000 (0:00:00.449) 0:04:36.476 ************ 2025-05-23 00:56:13.491789 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.491794 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.491800 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.491812 | orchestrator | 2025-05-23 00:56:13.491817 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.491823 | orchestrator | Friday 23 May 2025 00:47:51 +0000 (0:00:00.536) 0:04:37.012 ************ 2025-05-23 00:56:13.491828 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491833 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491839 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491844 | orchestrator | 2025-05-23 00:56:13.491849 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.491855 | orchestrator | Friday 23 May 
2025 00:47:52 +0000 (0:00:00.519) 0:04:37.531 ************ 2025-05-23 00:56:13.491860 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491865 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491870 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491876 | orchestrator | 2025-05-23 00:56:13.491881 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.491886 | orchestrator | Friday 23 May 2025 00:47:52 +0000 (0:00:00.299) 0:04:37.830 ************ 2025-05-23 00:56:13.491892 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491897 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491902 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491908 | orchestrator | 2025-05-23 00:56:13.491913 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.491918 | orchestrator | Friday 23 May 2025 00:47:52 +0000 (0:00:00.321) 0:04:38.152 ************ 2025-05-23 00:56:13.491924 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.491929 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.491934 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.491939 | orchestrator | 2025-05-23 00:56:13.491945 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.491999 | orchestrator | Friday 23 May 2025 00:47:53 +0000 (0:00:00.359) 0:04:38.511 ************ 2025-05-23 00:56:13.492006 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492015 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492024 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492033 | orchestrator | 2025-05-23 00:56:13.492044 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.492051 | orchestrator | Friday 23 May 2025 00:47:53 +0000 (0:00:00.367) 0:04:38.878 ************ 2025-05-23 00:56:13.492057 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.492062 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.492067 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.492072 | orchestrator | 2025-05-23 00:56:13.492078 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.492083 | orchestrator | Friday 23 May 2025 00:47:53 +0000 (0:00:00.539) 0:04:39.418 ************ 2025-05-23 00:56:13.492089 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.492094 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.492099 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.492104 | orchestrator | 2025-05-23 00:56:13.492110 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.492119 | orchestrator | Friday 23 May 2025 00:47:54 +0000 (0:00:00.343) 0:04:39.762 ************ 2025-05-23 00:56:13.492124 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492129 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492135 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492140 | orchestrator | 2025-05-23 00:56:13.492145 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.492151 | orchestrator | Friday 23 May 2025 00:47:54 +0000 (0:00:00.288) 0:04:40.051 ************ 2025-05-23 00:56:13.492156 | 
orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492161 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492167 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492172 | orchestrator | 2025-05-23 00:56:13.492177 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.492188 | orchestrator | Friday 23 May 2025 00:47:54 +0000 (0:00:00.271) 0:04:40.322 ************ 2025-05-23 00:56:13.492203 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492208 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492214 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492219 | orchestrator | 2025-05-23 00:56:13.492224 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.492230 | orchestrator | Friday 23 May 2025 00:47:55 +0000 (0:00:00.468) 0:04:40.790 ************ 2025-05-23 00:56:13.492235 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492240 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492245 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492251 | orchestrator | 2025-05-23 00:56:13.492256 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.492261 | orchestrator | Friday 23 May 2025 00:47:55 +0000 (0:00:00.288) 0:04:41.079 ************ 2025-05-23 00:56:13.492267 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492272 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492277 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492283 | orchestrator | 2025-05-23 00:56:13.492288 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.492293 | orchestrator | Friday 23 May 2025 00:47:55 +0000 (0:00:00.239) 0:04:41.318 ************ 2025-05-23 00:56:13.492299 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492304 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492309 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492315 | orchestrator | 2025-05-23 00:56:13.492332 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.492338 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:00.246) 0:04:41.565 ************ 2025-05-23 00:56:13.492344 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492349 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492354 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492360 | orchestrator | 2025-05-23 00:56:13.492365 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.492371 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:00.486) 0:04:42.051 ************ 2025-05-23 00:56:13.492376 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492381 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492486 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492492 | orchestrator | 2025-05-23 00:56:13.492497 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.492502 | orchestrator | Friday 23 May 2025 00:47:56 +0000 (0:00:00.312) 0:04:42.364 ************ 2025-05-23 00:56:13.492507 | 
orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492512 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492516 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492521 | orchestrator | 2025-05-23 00:56:13.492526 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.492531 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.302) 0:04:42.666 ************ 2025-05-23 00:56:13.492535 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492540 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492545 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492549 | orchestrator | 2025-05-23 00:56:13.492554 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.492559 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.510) 0:04:43.177 ************ 2025-05-23 00:56:13.492564 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492568 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492573 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492584 | orchestrator | 2025-05-23 00:56:13.492589 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.492594 | orchestrator | Friday 23 May 2025 00:47:57 +0000 (0:00:00.306) 0:04:43.483 ************ 2025-05-23 00:56:13.492598 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492603 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492608 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492613 | orchestrator | 2025-05-23 00:56:13.492680 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.492688 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.309) 0:04:43.792 ************ 2025-05-23 00:56:13.492693 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.492706 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.492711 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492717 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.492722 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.492727 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492733 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.492739 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.492744 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492749 | orchestrator | 2025-05-23 00:56:13.492754 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.492759 | orchestrator | Friday 23 May 2025 00:47:58 +0000 (0:00:00.350) 0:04:44.142 ************ 2025-05-23 00:56:13.492769 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-23 00:56:13.492775 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-23 00:56:13.492780 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492785 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-23 00:56:13.492791 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-23 00:56:13.492796 | orchestrator | skipping: 
[testbed-node-1] 2025-05-23 00:56:13.492802 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-23 00:56:13.492807 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-23 00:56:13.492812 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492817 | orchestrator | 2025-05-23 00:56:13.492822 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.492827 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.463) 0:04:44.606 ************ 2025-05-23 00:56:13.492831 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492836 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492841 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492845 | orchestrator | 2025-05-23 00:56:13.492850 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.492855 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.278) 0:04:44.884 ************ 2025-05-23 00:56:13.492859 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492864 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492869 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492873 | orchestrator | 2025-05-23 00:56:13.492878 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.492883 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.286) 0:04:45.170 ************ 2025-05-23 00:56:13.492888 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492893 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492897 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492902 | orchestrator | 2025-05-23 00:56:13.492907 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.492911 | orchestrator | Friday 23 May 2025 00:47:59 +0000 (0:00:00.305) 0:04:45.475 ************ 2025-05-23 00:56:13.492916 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492924 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492928 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492933 | orchestrator | 2025-05-23 00:56:13.492938 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.492942 | orchestrator | Friday 23 May 2025 00:48:00 +0000 (0:00:00.436) 0:04:45.912 ************ 2025-05-23 00:56:13.492947 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492952 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492957 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492961 | orchestrator | 2025-05-23 00:56:13.492966 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.492971 | orchestrator | Friday 23 May 2025 00:48:00 +0000 (0:00:00.289) 0:04:46.201 ************ 2025-05-23 00:56:13.492975 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.492980 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.492985 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.492989 | orchestrator | 2025-05-23 00:56:13.492994 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.492999 | orchestrator | 
Friday 23 May 2025 00:48:01 +0000 (0:00:00.313) 0:04:46.514 ************ 2025-05-23 00:56:13.493004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.493008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.493013 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.493018 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493023 | orchestrator | 2025-05-23 00:56:13.493027 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.493032 | orchestrator | Friday 23 May 2025 00:48:01 +0000 (0:00:00.381) 0:04:46.895 ************ 2025-05-23 00:56:13.493037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.493042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.493046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.493051 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493056 | orchestrator | 2025-05-23 00:56:13.493060 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.493065 | orchestrator | Friday 23 May 2025 00:48:01 +0000 (0:00:00.379) 0:04:47.275 ************ 2025-05-23 00:56:13.493070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.493075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.493079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.493119 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493126 | orchestrator | 2025-05-23 00:56:13.493131 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.493135 | orchestrator | Friday 23 May 2025 00:48:02 +0000 (0:00:00.384) 0:04:47.660 ************ 2025-05-23 00:56:13.493140 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493145 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493150 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493154 | orchestrator | 2025-05-23 00:56:13.493159 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.493164 | orchestrator | Friday 23 May 2025 00:48:02 +0000 (0:00:00.525) 0:04:48.186 ************ 2025-05-23 00:56:13.493169 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.493173 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493178 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.493183 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493187 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.493192 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493197 | orchestrator | 2025-05-23 00:56:13.493205 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.493215 | orchestrator | Friday 23 May 2025 00:48:03 +0000 (0:00:00.549) 0:04:48.736 ************ 2025-05-23 00:56:13.493220 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493225 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493230 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493234 | orchestrator | 2025-05-23 00:56:13.493239 | 
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.493244 | orchestrator | Friday 23 May 2025 00:48:03 +0000 (0:00:00.454) 0:04:49.190 ************ 2025-05-23 00:56:13.493249 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493253 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493258 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493262 | orchestrator | 2025-05-23 00:56:13.493267 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.493272 | orchestrator | Friday 23 May 2025 00:48:04 +0000 (0:00:00.332) 0:04:49.523 ************ 2025-05-23 00:56:13.493277 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.493281 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.493286 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493291 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493295 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.493300 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493305 | orchestrator | 2025-05-23 00:56:13.493311 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.493320 | orchestrator | Friday 23 May 2025 00:48:04 +0000 (0:00:00.737) 0:04:50.260 ************ 2025-05-23 00:56:13.493325 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493330 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493335 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493339 | orchestrator | 2025-05-23 00:56:13.493344 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.493349 | orchestrator | Friday 23 May 2025 00:48:05 +0000 (0:00:00.359) 0:04:50.619 ************ 2025-05-23 00:56:13.493353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.493358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.493363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.493367 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:56:13.493377 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-23 00:56:13.493381 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:56:13.493405 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493413 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:56:13.493419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:56:13.493423 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:56:13.493428 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493433 | orchestrator | 2025-05-23 00:56:13.493438 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.493442 | orchestrator | Friday 23 May 2025 00:48:05 +0000 (0:00:00.633) 0:04:51.253 ************ 2025-05-23 00:56:13.493447 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493452 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493457 | orchestrator | skipping: [testbed-node-2] 
2025-05-23 00:56:13.493461 | orchestrator | 2025-05-23 00:56:13.493478 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.493484 | orchestrator | Friday 23 May 2025 00:48:06 +0000 (0:00:00.734) 0:04:51.988 ************ 2025-05-23 00:56:13.493488 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493493 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493502 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493507 | orchestrator | 2025-05-23 00:56:13.493511 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.493516 | orchestrator | Friday 23 May 2025 00:48:06 +0000 (0:00:00.499) 0:04:52.488 ************ 2025-05-23 00:56:13.493521 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493525 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493530 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493535 | orchestrator | 2025-05-23 00:56:13.493539 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.493544 | orchestrator | Friday 23 May 2025 00:48:07 +0000 (0:00:00.632) 0:04:53.121 ************ 2025-05-23 00:56:13.493549 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493554 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493558 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493563 | orchestrator | 2025-05-23 00:56:13.493568 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-23 00:56:13.493590 | orchestrator | Friday 23 May 2025 00:48:08 +0000 (0:00:00.522) 0:04:53.643 ************ 2025-05-23 00:56:13.493596 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493600 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.493605 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493610 | orchestrator | 2025-05-23 00:56:13.493615 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-23 00:56:13.493619 | orchestrator | Friday 23 May 2025 00:48:08 +0000 (0:00:00.531) 0:04:54.175 ************ 2025-05-23 00:56:13.493624 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.493629 | orchestrator | 2025-05-23 00:56:13.493633 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-23 00:56:13.493638 | orchestrator | Friday 23 May 2025 00:48:09 +0000 (0:00:00.518) 0:04:54.693 ************ 2025-05-23 00:56:13.493643 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493648 | orchestrator | 2025-05-23 00:56:13.493652 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-23 00:56:13.493660 | orchestrator | Friday 23 May 2025 00:48:09 +0000 (0:00:00.136) 0:04:54.830 ************ 2025-05-23 00:56:13.493665 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-23 00:56:13.493670 | orchestrator | 2025-05-23 00:56:13.493675 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-23 00:56:13.493679 | orchestrator | Friday 23 May 2025 00:48:09 +0000 (0:00:00.589) 0:04:55.420 ************ 2025-05-23 00:56:13.493684 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493689 | orchestrator | ok: 
[testbed-node-1] 2025-05-23 00:56:13.493693 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493698 | orchestrator | 2025-05-23 00:56:13.493703 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-23 00:56:13.493707 | orchestrator | Friday 23 May 2025 00:48:10 +0000 (0:00:00.505) 0:04:55.925 ************ 2025-05-23 00:56:13.493712 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493717 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.493721 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493726 | orchestrator | 2025-05-23 00:56:13.493731 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-23 00:56:13.493736 | orchestrator | Friday 23 May 2025 00:48:10 +0000 (0:00:00.309) 0:04:56.234 ************ 2025-05-23 00:56:13.493740 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.493745 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.493750 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.493754 | orchestrator | 2025-05-23 00:56:13.493760 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-23 00:56:13.493766 | orchestrator | Friday 23 May 2025 00:48:11 +0000 (0:00:01.181) 0:04:57.416 ************ 2025-05-23 00:56:13.493771 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.493776 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.493785 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.493790 | orchestrator | 2025-05-23 00:56:13.493796 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-23 00:56:13.493801 | orchestrator | Friday 23 May 2025 00:48:12 +0000 (0:00:00.737) 0:04:58.153 ************ 2025-05-23 00:56:13.493806 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.493811 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.493817 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.493822 | orchestrator | 2025-05-23 00:56:13.493827 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-23 00:56:13.493833 | orchestrator | Friday 23 May 2025 00:48:13 +0000 (0:00:00.870) 0:04:59.023 ************ 2025-05-23 00:56:13.493838 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493843 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.493849 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493854 | orchestrator | 2025-05-23 00:56:13.493859 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-23 00:56:13.493865 | orchestrator | Friday 23 May 2025 00:48:14 +0000 (0:00:00.737) 0:04:59.761 ************ 2025-05-23 00:56:13.493870 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493876 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493881 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493886 | orchestrator | 2025-05-23 00:56:13.493892 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-23 00:56:13.493897 | orchestrator | Friday 23 May 2025 00:48:14 +0000 (0:00:00.352) 0:05:00.114 ************ 2025-05-23 00:56:13.493902 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493908 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.493913 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493918 | orchestrator 
| 2025-05-23 00:56:13.493924 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-23 00:56:13.493929 | orchestrator | Friday 23 May 2025 00:48:15 +0000 (0:00:00.595) 0:05:00.710 ************ 2025-05-23 00:56:13.493934 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.493940 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.493945 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.493950 | orchestrator | 2025-05-23 00:56:13.493956 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-23 00:56:13.493961 | orchestrator | Friday 23 May 2025 00:48:15 +0000 (0:00:00.358) 0:05:01.068 ************ 2025-05-23 00:56:13.493966 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.493972 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.493977 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.493982 | orchestrator | 2025-05-23 00:56:13.493987 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-23 00:56:13.493993 | orchestrator | Friday 23 May 2025 00:48:15 +0000 (0:00:00.359) 0:05:01.428 ************ 2025-05-23 00:56:13.493998 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494004 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494009 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494031 | orchestrator | 2025-05-23 00:56:13.494037 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-23 00:56:13.494043 | orchestrator | Friday 23 May 2025 00:48:17 +0000 (0:00:01.278) 0:05:02.706 ************ 2025-05-23 00:56:13.494048 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494054 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494059 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494065 | orchestrator | 2025-05-23 00:56:13.494085 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-23 00:56:13.494091 | orchestrator | Friday 23 May 2025 00:48:17 +0000 (0:00:00.617) 0:05:03.324 ************ 2025-05-23 00:56:13.494097 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.494102 | orchestrator | 2025-05-23 00:56:13.494112 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-23 00:56:13.494126 | orchestrator | Friday 23 May 2025 00:48:18 +0000 (0:00:00.599) 0:05:03.923 ************ 2025-05-23 00:56:13.494131 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494136 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494140 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494150 | orchestrator | 2025-05-23 00:56:13.494155 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-23 00:56:13.494160 | orchestrator | Friday 23 May 2025 00:48:18 +0000 (0:00:00.308) 0:05:04.231 ************ 2025-05-23 00:56:13.494165 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494170 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494177 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494182 | orchestrator | 2025-05-23 00:56:13.494187 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-23 
00:56:13.494192 | orchestrator | Friday 23 May 2025 00:48:19 +0000 (0:00:00.590) 0:05:04.822 ************ 2025-05-23 00:56:13.494197 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.494202 | orchestrator | 2025-05-23 00:56:13.494206 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-23 00:56:13.494211 | orchestrator | Friday 23 May 2025 00:48:19 +0000 (0:00:00.595) 0:05:05.418 ************ 2025-05-23 00:56:13.494216 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494221 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494225 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494230 | orchestrator | 2025-05-23 00:56:13.494235 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-23 00:56:13.494240 | orchestrator | Friday 23 May 2025 00:48:21 +0000 (0:00:01.166) 0:05:06.585 ************ 2025-05-23 00:56:13.494244 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494249 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494254 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494258 | orchestrator | 2025-05-23 00:56:13.494263 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-23 00:56:13.494268 | orchestrator | Friday 23 May 2025 00:48:22 +0000 (0:00:01.663) 0:05:08.249 ************ 2025-05-23 00:56:13.494273 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494277 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494282 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494287 | orchestrator | 2025-05-23 00:56:13.494291 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-23 00:56:13.494296 | orchestrator | Friday 23 May 2025 00:48:24 +0000 (0:00:01.740) 0:05:09.990 ************ 2025-05-23 00:56:13.494301 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494306 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494310 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494315 | orchestrator | 2025-05-23 00:56:13.494320 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-23 00:56:13.494325 | orchestrator | Friday 23 May 2025 00:48:26 +0000 (0:00:01.824) 0:05:11.815 ************ 2025-05-23 00:56:13.494329 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.494334 | orchestrator | 2025-05-23 00:56:13.494339 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-23 00:56:13.494344 | orchestrator | Friday 23 May 2025 00:48:27 +0000 (0:00:00.828) 0:05:12.643 ************ 2025-05-23 00:56:13.494348 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
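The quorum wait above is the one step in this run that needed a retry: it keeps polling the cluster until every monitor reports in. Conceptually it checks ceph quorum_status until all expected mons appear under quorum_names. A minimal standalone sketch of such a wait — not the actual ceph-ansible task; the real role runs the command through the mon container's exec wrapper and derives the expected count from its own facts — could look like this, assuming a "mons" inventory group and a ceph CLI reachable on the target:

# Hypothetical wait-for-quorum task (illustrative only).
# Assumptions: inventory group "mons" holds the monitor hosts, and the plain
# "ceph" CLI works on the target; a containerized deployment like the one in
# this log would prefix the command with the mon container's exec command.
- name: Wait for the monitor(s) to form the quorum
  ansible.builtin.command: ceph quorum_status --format json
  register: quorum_status
  changed_when: false
  retries: 10
  delay: 20
  until: >-
    (quorum_status.stdout | from_json).quorum_names | length
    == groups['mons'] | length

With 10 retries and a 20-second delay this matches the roughly 21-second wait recorded for the task below, where the first poll failed and the second succeeded once all three monitors had joined the quorum.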
2025-05-23 00:56:13.494353 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494358 | orchestrator | 2025-05-23 00:56:13.494363 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-23 00:56:13.494367 | orchestrator | Friday 23 May 2025 00:48:48 +0000 (0:00:21.496) 0:05:34.139 ************ 2025-05-23 00:56:13.494376 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494381 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.494406 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.494411 | orchestrator | 2025-05-23 00:56:13.494416 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-23 00:56:13.494421 | orchestrator | Friday 23 May 2025 00:48:56 +0000 (0:00:07.435) 0:05:41.575 ************ 2025-05-23 00:56:13.494425 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494430 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494435 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494440 | orchestrator | 2025-05-23 00:56:13.494444 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.494449 | orchestrator | Friday 23 May 2025 00:48:57 +0000 (0:00:01.234) 0:05:42.810 ************ 2025-05-23 00:56:13.494454 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494458 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494463 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494468 | orchestrator | 2025-05-23 00:56:13.494472 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-23 00:56:13.494477 | orchestrator | Friday 23 May 2025 00:48:58 +0000 (0:00:00.713) 0:05:43.523 ************ 2025-05-23 00:56:13.494482 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.494487 | orchestrator | 2025-05-23 00:56:13.494491 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-23 00:56:13.494496 | orchestrator | Friday 23 May 2025 00:48:58 +0000 (0:00:00.794) 0:05:44.317 ************ 2025-05-23 00:56:13.494501 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494505 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.494526 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.494532 | orchestrator | 2025-05-23 00:56:13.494536 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-23 00:56:13.494541 | orchestrator | Friday 23 May 2025 00:48:59 +0000 (0:00:00.348) 0:05:44.666 ************ 2025-05-23 00:56:13.494546 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494551 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494555 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494560 | orchestrator | 2025-05-23 00:56:13.494565 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-23 00:56:13.494570 | orchestrator | Friday 23 May 2025 00:49:00 +0000 (0:00:01.196) 0:05:45.863 ************ 2025-05-23 00:56:13.494575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.494579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.494584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 
00:56:13.494589 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494594 | orchestrator | 2025-05-23 00:56:13.494598 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-23 00:56:13.494606 | orchestrator | Friday 23 May 2025 00:49:01 +0000 (0:00:00.970) 0:05:46.833 ************ 2025-05-23 00:56:13.494611 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494615 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.494620 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.494625 | orchestrator | 2025-05-23 00:56:13.494629 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.494634 | orchestrator | Friday 23 May 2025 00:49:01 +0000 (0:00:00.339) 0:05:47.173 ************ 2025-05-23 00:56:13.494639 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.494644 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.494648 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.494653 | orchestrator | 2025-05-23 00:56:13.494658 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-23 00:56:13.494662 | orchestrator | 2025-05-23 00:56:13.494667 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.494676 | orchestrator | Friday 23 May 2025 00:49:03 +0000 (0:00:02.156) 0:05:49.330 ************ 2025-05-23 00:56:13.494681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.494686 | orchestrator | 2025-05-23 00:56:13.494691 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-23 00:56:13.494695 | orchestrator | Friday 23 May 2025 00:49:04 +0000 (0:00:00.808) 0:05:50.139 ************ 2025-05-23 00:56:13.494700 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494705 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.494710 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.494714 | orchestrator | 2025-05-23 00:56:13.494719 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.494724 | orchestrator | Friday 23 May 2025 00:49:05 +0000 (0:00:00.771) 0:05:50.910 ************ 2025-05-23 00:56:13.494728 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494733 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494738 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494742 | orchestrator | 2025-05-23 00:56:13.494747 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.494752 | orchestrator | Friday 23 May 2025 00:49:05 +0000 (0:00:00.309) 0:05:51.220 ************ 2025-05-23 00:56:13.494757 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494761 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494766 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494771 | orchestrator | 2025-05-23 00:56:13.494775 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.494780 | orchestrator | Friday 23 May 2025 00:49:06 +0000 (0:00:00.448) 0:05:51.668 ************ 2025-05-23 00:56:13.494785 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494789 | orchestrator | skipping: 
[testbed-node-1] 2025-05-23 00:56:13.494794 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494799 | orchestrator | 2025-05-23 00:56:13.494803 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.494808 | orchestrator | Friday 23 May 2025 00:49:06 +0000 (0:00:00.232) 0:05:51.901 ************ 2025-05-23 00:56:13.494813 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.494818 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.494822 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.494827 | orchestrator | 2025-05-23 00:56:13.494832 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.494836 | orchestrator | Friday 23 May 2025 00:49:07 +0000 (0:00:00.664) 0:05:52.565 ************ 2025-05-23 00:56:13.494841 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494846 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494851 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494855 | orchestrator | 2025-05-23 00:56:13.494860 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.494865 | orchestrator | Friday 23 May 2025 00:49:07 +0000 (0:00:00.296) 0:05:52.861 ************ 2025-05-23 00:56:13.494869 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494874 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494879 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494883 | orchestrator | 2025-05-23 00:56:13.494888 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.494893 | orchestrator | Friday 23 May 2025 00:49:07 +0000 (0:00:00.453) 0:05:53.315 ************ 2025-05-23 00:56:13.494897 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494902 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494907 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494911 | orchestrator | 2025-05-23 00:56:13.494916 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.494921 | orchestrator | Friday 23 May 2025 00:49:08 +0000 (0:00:00.285) 0:05:53.600 ************ 2025-05-23 00:56:13.494929 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494933 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494938 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494943 | orchestrator | 2025-05-23 00:56:13.494961 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.494967 | orchestrator | Friday 23 May 2025 00:49:08 +0000 (0:00:00.285) 0:05:53.886 ************ 2025-05-23 00:56:13.494972 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.494976 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.494981 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.494986 | orchestrator | 2025-05-23 00:56:13.494991 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.494995 | orchestrator | Friday 23 May 2025 00:49:08 +0000 (0:00:00.264) 0:05:54.151 ************ 2025-05-23 00:56:13.495000 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.495005 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.495010 | orchestrator | ok: [testbed-node-2] 
2025-05-23 00:56:13.495015 | orchestrator | 2025-05-23 00:56:13.495019 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.495024 | orchestrator | Friday 23 May 2025 00:49:09 +0000 (0:00:00.834) 0:05:54.985 ************ 2025-05-23 00:56:13.495029 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495034 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495041 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495046 | orchestrator | 2025-05-23 00:56:13.495051 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.495055 | orchestrator | Friday 23 May 2025 00:49:09 +0000 (0:00:00.284) 0:05:55.269 ************ 2025-05-23 00:56:13.495060 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.495065 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.495070 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.495074 | orchestrator | 2025-05-23 00:56:13.495079 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.495084 | orchestrator | Friday 23 May 2025 00:49:10 +0000 (0:00:00.286) 0:05:55.555 ************ 2025-05-23 00:56:13.495089 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495093 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495098 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495103 | orchestrator | 2025-05-23 00:56:13.495108 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.495112 | orchestrator | Friday 23 May 2025 00:49:10 +0000 (0:00:00.286) 0:05:55.842 ************ 2025-05-23 00:56:13.495117 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495122 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495126 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495131 | orchestrator | 2025-05-23 00:56:13.495136 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.495140 | orchestrator | Friday 23 May 2025 00:49:10 +0000 (0:00:00.515) 0:05:56.357 ************ 2025-05-23 00:56:13.495145 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495150 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495154 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495159 | orchestrator | 2025-05-23 00:56:13.495164 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.495168 | orchestrator | Friday 23 May 2025 00:49:11 +0000 (0:00:00.343) 0:05:56.700 ************ 2025-05-23 00:56:13.495173 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495178 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495182 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495187 | orchestrator | 2025-05-23 00:56:13.495191 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.495196 | orchestrator | Friday 23 May 2025 00:49:11 +0000 (0:00:00.352) 0:05:57.053 ************ 2025-05-23 00:56:13.495204 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495209 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495214 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495219 | orchestrator | 2025-05-23 00:56:13.495223 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.495228 | orchestrator | Friday 23 May 2025 00:49:11 +0000 (0:00:00.319) 0:05:57.372 ************ 2025-05-23 00:56:13.495233 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.495237 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.495242 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.495247 | orchestrator | 2025-05-23 00:56:13.495252 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.495257 | orchestrator | Friday 23 May 2025 00:49:12 +0000 (0:00:00.590) 0:05:57.963 ************ 2025-05-23 00:56:13.495261 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.495266 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.495271 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.495275 | orchestrator | 2025-05-23 00:56:13.495280 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.495285 | orchestrator | Friday 23 May 2025 00:49:12 +0000 (0:00:00.382) 0:05:58.345 ************ 2025-05-23 00:56:13.495289 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495294 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495299 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495303 | orchestrator | 2025-05-23 00:56:13.495308 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.495313 | orchestrator | Friday 23 May 2025 00:49:13 +0000 (0:00:00.351) 0:05:58.697 ************ 2025-05-23 00:56:13.495317 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495322 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495327 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495331 | orchestrator | 2025-05-23 00:56:13.495336 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.495341 | orchestrator | Friday 23 May 2025 00:49:13 +0000 (0:00:00.355) 0:05:59.053 ************ 2025-05-23 00:56:13.495346 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495350 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495355 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495360 | orchestrator | 2025-05-23 00:56:13.495364 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.495369 | orchestrator | Friday 23 May 2025 00:49:14 +0000 (0:00:00.591) 0:05:59.644 ************ 2025-05-23 00:56:13.495374 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495378 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495396 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495402 | orchestrator | 2025-05-23 00:56:13.495421 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.495427 | orchestrator | Friday 23 May 2025 00:49:14 +0000 (0:00:00.343) 0:05:59.988 ************ 2025-05-23 00:56:13.495432 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495436 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495441 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495446 | orchestrator | 2025-05-23 00:56:13.495451 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-05-23 00:56:13.495455 | orchestrator | Friday 23 May 2025 00:49:14 +0000 (0:00:00.334) 0:06:00.323 ************ 2025-05-23 00:56:13.495460 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495465 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495470 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495474 | orchestrator | 2025-05-23 00:56:13.495479 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.495484 | orchestrator | Friday 23 May 2025 00:49:15 +0000 (0:00:00.308) 0:06:00.631 ************ 2025-05-23 00:56:13.495488 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495497 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495504 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495509 | orchestrator | 2025-05-23 00:56:13.495514 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.495519 | orchestrator | Friday 23 May 2025 00:49:15 +0000 (0:00:00.598) 0:06:01.230 ************ 2025-05-23 00:56:13.495524 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495529 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495533 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495538 | orchestrator | 2025-05-23 00:56:13.495543 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.495547 | orchestrator | Friday 23 May 2025 00:49:16 +0000 (0:00:00.328) 0:06:01.559 ************ 2025-05-23 00:56:13.495552 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495557 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495562 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495566 | orchestrator | 2025-05-23 00:56:13.495571 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.495576 | orchestrator | Friday 23 May 2025 00:49:16 +0000 (0:00:00.337) 0:06:01.897 ************ 2025-05-23 00:56:13.495581 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495586 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495590 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495595 | orchestrator | 2025-05-23 00:56:13.495600 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.495604 | orchestrator | Friday 23 May 2025 00:49:16 +0000 (0:00:00.333) 0:06:02.230 ************ 2025-05-23 00:56:13.495609 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495614 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495619 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495623 | orchestrator | 2025-05-23 00:56:13.495628 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.495633 | orchestrator | Friday 23 May 2025 00:49:17 +0000 (0:00:00.607) 0:06:02.838 ************ 2025-05-23 00:56:13.495637 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495642 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495647 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495651 | orchestrator | 2025-05-23 00:56:13.495656 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.495661 | orchestrator | Friday 23 May 2025 00:49:17 +0000 (0:00:00.365) 0:06:03.203 ************ 2025-05-23 00:56:13.495666 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.495671 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.495675 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.495680 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.495685 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495690 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495694 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.495699 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.495704 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495709 | orchestrator | 2025-05-23 00:56:13.495713 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.495718 | orchestrator | Friday 23 May 2025 00:49:18 +0000 (0:00:00.408) 0:06:03.612 ************ 2025-05-23 00:56:13.495723 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-23 00:56:13.495728 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-23 00:56:13.495732 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-23 00:56:13.495737 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-23 00:56:13.495742 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495749 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495754 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-23 00:56:13.495759 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-23 00:56:13.495764 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495768 | orchestrator | 2025-05-23 00:56:13.495773 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.495778 | orchestrator | Friday 23 May 2025 00:49:18 +0000 (0:00:00.388) 0:06:04.000 ************ 2025-05-23 00:56:13.495782 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495787 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495792 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495796 | orchestrator | 2025-05-23 00:56:13.495801 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.495806 | orchestrator | Friday 23 May 2025 00:49:19 +0000 (0:00:00.601) 0:06:04.602 ************ 2025-05-23 00:56:13.495810 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495815 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495820 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495824 | orchestrator | 2025-05-23 00:56:13.495843 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.495850 | orchestrator | Friday 23 May 2025 00:49:19 +0000 (0:00:00.327) 0:06:04.930 ************ 2025-05-23 00:56:13.495854 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495859 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495864 | orchestrator | skipping: [testbed-node-2] 2025-05-23 
00:56:13.495869 | orchestrator | 2025-05-23 00:56:13.495873 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.495878 | orchestrator | Friday 23 May 2025 00:49:19 +0000 (0:00:00.335) 0:06:05.266 ************ 2025-05-23 00:56:13.495883 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495888 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495892 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495897 | orchestrator | 2025-05-23 00:56:13.495902 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.495907 | orchestrator | Friday 23 May 2025 00:49:20 +0000 (0:00:00.390) 0:06:05.656 ************ 2025-05-23 00:56:13.495916 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495921 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495925 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495930 | orchestrator | 2025-05-23 00:56:13.495935 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.495940 | orchestrator | Friday 23 May 2025 00:49:20 +0000 (0:00:00.707) 0:06:06.364 ************ 2025-05-23 00:56:13.495944 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495949 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.495954 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.495959 | orchestrator | 2025-05-23 00:56:13.495964 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.495968 | orchestrator | Friday 23 May 2025 00:49:21 +0000 (0:00:00.344) 0:06:06.709 ************ 2025-05-23 00:56:13.495973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.495978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.495982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.495987 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.495992 | orchestrator | 2025-05-23 00:56:13.495996 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.496001 | orchestrator | Friday 23 May 2025 00:49:21 +0000 (0:00:00.536) 0:06:07.246 ************ 2025-05-23 00:56:13.496006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.496011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.496018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.496023 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496028 | orchestrator | 2025-05-23 00:56:13.496033 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.496037 | orchestrator | Friday 23 May 2025 00:49:22 +0000 (0:00:00.538) 0:06:07.784 ************ 2025-05-23 00:56:13.496042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.496047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.496052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.496056 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496061 | orchestrator | 2025-05-23 00:56:13.496066 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.496070 | orchestrator | Friday 23 May 2025 00:49:22 +0000 (0:00:00.397) 0:06:08.182 ************ 2025-05-23 00:56:13.496075 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496080 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496084 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496089 | orchestrator | 2025-05-23 00:56:13.496094 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.496098 | orchestrator | Friday 23 May 2025 00:49:23 +0000 (0:00:00.454) 0:06:08.636 ************ 2025-05-23 00:56:13.496103 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.496108 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496113 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.496117 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496122 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.496127 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496132 | orchestrator | 2025-05-23 00:56:13.496136 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.496141 | orchestrator | Friday 23 May 2025 00:49:23 +0000 (0:00:00.605) 0:06:09.242 ************ 2025-05-23 00:56:13.496146 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496150 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496155 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496160 | orchestrator | 2025-05-23 00:56:13.496165 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.496169 | orchestrator | Friday 23 May 2025 00:49:24 +0000 (0:00:00.331) 0:06:09.573 ************ 2025-05-23 00:56:13.496174 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496179 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496183 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496188 | orchestrator | 2025-05-23 00:56:13.496193 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.496197 | orchestrator | Friday 23 May 2025 00:49:24 +0000 (0:00:00.304) 0:06:09.877 ************ 2025-05-23 00:56:13.496202 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.496207 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496211 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.496216 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496221 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.496226 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496230 | orchestrator | 2025-05-23 00:56:13.496235 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.496254 | orchestrator | Friday 23 May 2025 00:49:25 +0000 (0:00:00.740) 0:06:10.618 ************ 2025-05-23 00:56:13.496259 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496264 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496269 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496273 | orchestrator | 2025-05-23 00:56:13.496278 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
2025-05-23 00:56:13.496286 | orchestrator | Friday 23 May 2025 00:49:25 +0000 (0:00:00.306) 0:06:10.925 ************ 2025-05-23 00:56:13.496291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.496296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.496301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.496305 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496310 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:56:13.496315 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-23 00:56:13.496322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:56:13.496327 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496331 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:56:13.496336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:56:13.496341 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:56:13.496346 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496350 | orchestrator | 2025-05-23 00:56:13.496355 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.496360 | orchestrator | Friday 23 May 2025 00:49:26 +0000 (0:00:00.575) 0:06:11.501 ************ 2025-05-23 00:56:13.496365 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496370 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496374 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496379 | orchestrator | 2025-05-23 00:56:13.496414 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.496420 | orchestrator | Friday 23 May 2025 00:49:26 +0000 (0:00:00.638) 0:06:12.139 ************ 2025-05-23 00:56:13.496425 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496430 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496435 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496440 | orchestrator | 2025-05-23 00:56:13.496444 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.496449 | orchestrator | Friday 23 May 2025 00:49:27 +0000 (0:00:00.483) 0:06:12.622 ************ 2025-05-23 00:56:13.496454 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496459 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496463 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496468 | orchestrator | 2025-05-23 00:56:13.496473 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.496477 | orchestrator | Friday 23 May 2025 00:49:27 +0000 (0:00:00.684) 0:06:13.307 ************ 2025-05-23 00:56:13.496482 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496487 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496492 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496496 | orchestrator | 2025-05-23 00:56:13.496501 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-23 00:56:13.496506 | orchestrator | Friday 23 May 2025 00:49:28 +0000 (0:00:00.480) 0:06:13.787 ************ 2025-05-23 00:56:13.496511 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-05-23 00:56:13.496515 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.496520 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.496525 | orchestrator | 2025-05-23 00:56:13.496530 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-23 00:56:13.496534 | orchestrator | Friday 23 May 2025 00:49:29 +0000 (0:00:00.909) 0:06:14.696 ************ 2025-05-23 00:56:13.496539 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.496544 | orchestrator | 2025-05-23 00:56:13.496549 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-23 00:56:13.496557 | orchestrator | Friday 23 May 2025 00:49:29 +0000 (0:00:00.486) 0:06:15.183 ************ 2025-05-23 00:56:13.496562 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.496567 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.496571 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.496576 | orchestrator | 2025-05-23 00:56:13.496581 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-23 00:56:13.496586 | orchestrator | Friday 23 May 2025 00:49:30 +0000 (0:00:00.619) 0:06:15.803 ************ 2025-05-23 00:56:13.496590 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496595 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496600 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496604 | orchestrator | 2025-05-23 00:56:13.496609 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-23 00:56:13.496614 | orchestrator | Friday 23 May 2025 00:49:30 +0000 (0:00:00.417) 0:06:16.220 ************ 2025-05-23 00:56:13.496618 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 00:56:13.496623 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 00:56:13.496628 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 00:56:13.496633 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-23 00:56:13.496637 | orchestrator | 2025-05-23 00:56:13.496642 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-23 00:56:13.496647 | orchestrator | Friday 23 May 2025 00:49:38 +0000 (0:00:07.946) 0:06:24.167 ************ 2025-05-23 00:56:13.496651 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.496656 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.496661 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.496666 | orchestrator | 2025-05-23 00:56:13.496685 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-23 00:56:13.496691 | orchestrator | Friday 23 May 2025 00:49:39 +0000 (0:00:00.461) 0:06:24.629 ************ 2025-05-23 00:56:13.496696 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-23 00:56:13.496700 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-23 00:56:13.496705 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-23 00:56:13.496710 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-23 00:56:13.496715 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-23 00:56:13.496719 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:56:13.496724 | orchestrator | 2025-05-23 00:56:13.496729 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-23 00:56:13.496734 | orchestrator | Friday 23 May 2025 00:49:41 +0000 (0:00:02.151) 0:06:26.780 ************ 2025-05-23 00:56:13.496738 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-23 00:56:13.496743 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-23 00:56:13.496751 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-23 00:56:13.496756 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 00:56:13.496760 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-23 00:56:13.496765 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-23 00:56:13.496770 | orchestrator | 2025-05-23 00:56:13.496775 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-23 00:56:13.496779 | orchestrator | Friday 23 May 2025 00:49:42 +0000 (0:00:01.320) 0:06:28.101 ************ 2025-05-23 00:56:13.496784 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.496789 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.496794 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.496798 | orchestrator | 2025-05-23 00:56:13.496803 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-23 00:56:13.496808 | orchestrator | Friday 23 May 2025 00:49:43 +0000 (0:00:00.828) 0:06:28.929 ************ 2025-05-23 00:56:13.496812 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496817 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496825 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496830 | orchestrator | 2025-05-23 00:56:13.496835 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-23 00:56:13.496840 | orchestrator | Friday 23 May 2025 00:49:44 +0000 (0:00:00.733) 0:06:29.663 ************ 2025-05-23 00:56:13.496844 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496849 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496854 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496858 | orchestrator | 2025-05-23 00:56:13.496863 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-23 00:56:13.496868 | orchestrator | Friday 23 May 2025 00:49:44 +0000 (0:00:00.348) 0:06:30.012 ************ 2025-05-23 00:56:13.496873 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.496878 | orchestrator | 2025-05-23 00:56:13.496883 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-23 00:56:13.496887 | orchestrator | Friday 23 May 2025 00:49:45 +0000 (0:00:00.601) 0:06:30.614 ************ 2025-05-23 00:56:13.496892 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496897 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496902 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496906 | orchestrator | 2025-05-23 00:56:13.496911 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-23 00:56:13.496916 | orchestrator | 
Friday 23 May 2025 00:49:45 +0000 (0:00:00.588) 0:06:31.202 ************ 2025-05-23 00:56:13.496920 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.496925 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.496930 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.496935 | orchestrator | 2025-05-23 00:56:13.496939 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-23 00:56:13.496944 | orchestrator | Friday 23 May 2025 00:49:46 +0000 (0:00:00.359) 0:06:31.562 ************ 2025-05-23 00:56:13.496949 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.496954 | orchestrator | 2025-05-23 00:56:13.496959 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-23 00:56:13.496963 | orchestrator | Friday 23 May 2025 00:49:46 +0000 (0:00:00.533) 0:06:32.095 ************ 2025-05-23 00:56:13.496968 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.496973 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.496977 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.496982 | orchestrator | 2025-05-23 00:56:13.496987 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-23 00:56:13.496992 | orchestrator | Friday 23 May 2025 00:49:48 +0000 (0:00:01.428) 0:06:33.524 ************ 2025-05-23 00:56:13.496997 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497001 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497006 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497011 | orchestrator | 2025-05-23 00:56:13.497015 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-23 00:56:13.497020 | orchestrator | Friday 23 May 2025 00:49:49 +0000 (0:00:01.160) 0:06:34.684 ************ 2025-05-23 00:56:13.497025 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497029 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497034 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497039 | orchestrator | 2025-05-23 00:56:13.497043 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-23 00:56:13.497048 | orchestrator | Friday 23 May 2025 00:49:50 +0000 (0:00:01.661) 0:06:36.346 ************ 2025-05-23 00:56:13.497053 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497058 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497062 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497067 | orchestrator | 2025-05-23 00:56:13.497072 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-23 00:56:13.497093 | orchestrator | Friday 23 May 2025 00:49:52 +0000 (0:00:02.115) 0:06:38.462 ************ 2025-05-23 00:56:13.497099 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.497103 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.497108 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-23 00:56:13.497113 | orchestrator | 2025-05-23 00:56:13.497118 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-23 00:56:13.497122 | orchestrator | Friday 23 May 2025 00:49:53 +0000 (0:00:00.777) 0:06:39.239 ************ 2025-05-23 
00:56:13.497127 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-23 00:56:13.497132 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-23 00:56:13.497137 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.497141 | orchestrator | 2025-05-23 00:56:13.497146 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-23 00:56:13.497151 | orchestrator | Friday 23 May 2025 00:50:07 +0000 (0:00:13.293) 0:06:52.533 ************ 2025-05-23 00:56:13.497156 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.497161 | orchestrator | 2025-05-23 00:56:13.497166 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-23 00:56:13.497170 | orchestrator | Friday 23 May 2025 00:50:08 +0000 (0:00:01.727) 0:06:54.261 ************ 2025-05-23 00:56:13.497175 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.497180 | orchestrator | 2025-05-23 00:56:13.497184 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-23 00:56:13.497189 | orchestrator | Friday 23 May 2025 00:50:09 +0000 (0:00:00.485) 0:06:54.747 ************ 2025-05-23 00:56:13.497194 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.497199 | orchestrator | 2025-05-23 00:56:13.497203 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-23 00:56:13.497211 | orchestrator | Friday 23 May 2025 00:50:09 +0000 (0:00:00.327) 0:06:55.074 ************ 2025-05-23 00:56:13.497219 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-23 00:56:13.497228 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-23 00:56:13.497236 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-23 00:56:13.497244 | orchestrator | 2025-05-23 00:56:13.497252 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-23 00:56:13.497261 | orchestrator | Friday 23 May 2025 00:50:15 +0000 (0:00:06.123) 0:07:01.197 ************ 2025-05-23 00:56:13.497269 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-23 00:56:13.497277 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-23 00:56:13.497286 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-23 00:56:13.497295 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-23 00:56:13.497300 | orchestrator | 2025-05-23 00:56:13.497304 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.497309 | orchestrator | Friday 23 May 2025 00:50:20 +0000 (0:00:04.940) 0:07:06.138 ************ 2025-05-23 00:56:13.497314 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497318 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497323 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497328 | orchestrator | 2025-05-23 00:56:13.497373 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-23 00:56:13.497448 | orchestrator | Friday 23 May 
2025 00:50:21 +0000 (0:00:00.730) 0:07:06.869 ************ 2025-05-23 00:56:13.497455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:56:13.497464 | orchestrator | 2025-05-23 00:56:13.497469 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-23 00:56:13.497474 | orchestrator | Friday 23 May 2025 00:50:22 +0000 (0:00:01.115) 0:07:07.984 ************ 2025-05-23 00:56:13.497479 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.497484 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.497488 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.497493 | orchestrator | 2025-05-23 00:56:13.497498 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-23 00:56:13.497502 | orchestrator | Friday 23 May 2025 00:50:22 +0000 (0:00:00.370) 0:07:08.354 ************ 2025-05-23 00:56:13.497507 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497512 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497517 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497521 | orchestrator | 2025-05-23 00:56:13.497526 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-23 00:56:13.497531 | orchestrator | Friday 23 May 2025 00:50:24 +0000 (0:00:01.273) 0:07:09.627 ************ 2025-05-23 00:56:13.497535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:56:13.497540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:56:13.497545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:56:13.497550 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.497555 | orchestrator | 2025-05-23 00:56:13.497560 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-23 00:56:13.497564 | orchestrator | Friday 23 May 2025 00:50:25 +0000 (0:00:01.329) 0:07:10.957 ************ 2025-05-23 00:56:13.497569 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.497574 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.497578 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.497583 | orchestrator | 2025-05-23 00:56:13.497588 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.497593 | orchestrator | Friday 23 May 2025 00:50:25 +0000 (0:00:00.411) 0:07:11.368 ************ 2025-05-23 00:56:13.497597 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.497626 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.497632 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.497636 | orchestrator | 2025-05-23 00:56:13.497641 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-23 00:56:13.497646 | orchestrator | 2025-05-23 00:56:13.497651 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.497655 | orchestrator | Friday 23 May 2025 00:50:27 +0000 (0:00:02.052) 0:07:13.420 ************ 2025-05-23 00:56:13.497660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.497665 | orchestrator | 2025-05-23 00:56:13.497670 | orchestrator | TASK [ceph-handler : check 
for a mon container] ******************************** 2025-05-23 00:56:13.497675 | orchestrator | Friday 23 May 2025 00:50:28 +0000 (0:00:00.738) 0:07:14.159 ************ 2025-05-23 00:56:13.497680 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497684 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497689 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497694 | orchestrator | 2025-05-23 00:56:13.497701 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.497706 | orchestrator | Friday 23 May 2025 00:50:28 +0000 (0:00:00.321) 0:07:14.480 ************ 2025-05-23 00:56:13.497711 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.497716 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.497720 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.497725 | orchestrator | 2025-05-23 00:56:13.497730 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.497735 | orchestrator | Friday 23 May 2025 00:50:29 +0000 (0:00:00.741) 0:07:15.222 ************ 2025-05-23 00:56:13.497744 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.497749 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.497754 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.497758 | orchestrator | 2025-05-23 00:56:13.497763 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.497768 | orchestrator | Friday 23 May 2025 00:50:30 +0000 (0:00:01.065) 0:07:16.288 ************ 2025-05-23 00:56:13.497773 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.497777 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.497782 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.497787 | orchestrator | 2025-05-23 00:56:13.497791 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.497796 | orchestrator | Friday 23 May 2025 00:50:31 +0000 (0:00:00.736) 0:07:17.024 ************ 2025-05-23 00:56:13.497801 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497806 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497810 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497815 | orchestrator | 2025-05-23 00:56:13.497820 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.497825 | orchestrator | Friday 23 May 2025 00:50:31 +0000 (0:00:00.328) 0:07:17.352 ************ 2025-05-23 00:56:13.497829 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497834 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497839 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497844 | orchestrator | 2025-05-23 00:56:13.497849 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.497853 | orchestrator | Friday 23 May 2025 00:50:32 +0000 (0:00:00.315) 0:07:17.668 ************ 2025-05-23 00:56:13.497858 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497863 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497867 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497872 | orchestrator | 2025-05-23 00:56:13.497877 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.497882 | orchestrator | Friday 23 May 
2025 00:50:32 +0000 (0:00:00.591) 0:07:18.259 ************ 2025-05-23 00:56:13.497886 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497891 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497896 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497900 | orchestrator | 2025-05-23 00:56:13.497905 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.497910 | orchestrator | Friday 23 May 2025 00:50:33 +0000 (0:00:00.376) 0:07:18.635 ************ 2025-05-23 00:56:13.497915 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497919 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497924 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497929 | orchestrator | 2025-05-23 00:56:13.497933 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.497938 | orchestrator | Friday 23 May 2025 00:50:33 +0000 (0:00:00.375) 0:07:19.010 ************ 2025-05-23 00:56:13.497943 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.497948 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.497952 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.497957 | orchestrator | 2025-05-23 00:56:13.497962 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.497967 | orchestrator | Friday 23 May 2025 00:50:33 +0000 (0:00:00.320) 0:07:19.331 ************ 2025-05-23 00:56:13.497971 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.497976 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.497981 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.497986 | orchestrator | 2025-05-23 00:56:13.497990 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.497995 | orchestrator | Friday 23 May 2025 00:50:35 +0000 (0:00:01.204) 0:07:20.535 ************ 2025-05-23 00:56:13.498000 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498005 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498042 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498053 | orchestrator | 2025-05-23 00:56:13.498062 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.498069 | orchestrator | Friday 23 May 2025 00:50:35 +0000 (0:00:00.366) 0:07:20.902 ************ 2025-05-23 00:56:13.498082 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498090 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498098 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498106 | orchestrator | 2025-05-23 00:56:13.498115 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.498147 | orchestrator | Friday 23 May 2025 00:50:35 +0000 (0:00:00.366) 0:07:21.269 ************ 2025-05-23 00:56:13.498153 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.498158 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.498163 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.498168 | orchestrator | 2025-05-23 00:56:13.498173 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.498178 | orchestrator | Friday 23 May 2025 00:50:36 +0000 (0:00:00.334) 0:07:21.603 ************ 2025-05-23 00:56:13.498182 | 
orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.498187 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.498192 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.498196 | orchestrator | 2025-05-23 00:56:13.498201 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.498206 | orchestrator | Friday 23 May 2025 00:50:36 +0000 (0:00:00.608) 0:07:22.211 ************ 2025-05-23 00:56:13.498210 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.498215 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.498220 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.498224 | orchestrator | 2025-05-23 00:56:13.498229 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.498237 | orchestrator | Friday 23 May 2025 00:50:37 +0000 (0:00:00.352) 0:07:22.564 ************ 2025-05-23 00:56:13.498242 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498247 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498252 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498256 | orchestrator | 2025-05-23 00:56:13.498261 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.498266 | orchestrator | Friday 23 May 2025 00:50:37 +0000 (0:00:00.370) 0:07:22.935 ************ 2025-05-23 00:56:13.498270 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498275 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498280 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498284 | orchestrator | 2025-05-23 00:56:13.498289 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.498294 | orchestrator | Friday 23 May 2025 00:50:37 +0000 (0:00:00.307) 0:07:23.243 ************ 2025-05-23 00:56:13.498298 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498303 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498308 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498312 | orchestrator | 2025-05-23 00:56:13.498317 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.498322 | orchestrator | Friday 23 May 2025 00:50:38 +0000 (0:00:00.609) 0:07:23.853 ************ 2025-05-23 00:56:13.498326 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.498331 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.498336 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.498340 | orchestrator | 2025-05-23 00:56:13.498345 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.498350 | orchestrator | Friday 23 May 2025 00:50:38 +0000 (0:00:00.382) 0:07:24.235 ************ 2025-05-23 00:56:13.498355 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498359 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498364 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498374 | orchestrator | 2025-05-23 00:56:13.498378 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.498396 | orchestrator | Friday 23 May 2025 00:50:39 +0000 (0:00:00.383) 0:07:24.619 ************ 2025-05-23 00:56:13.498404 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498411 | orchestrator | skipping: [testbed-node-4] 
2025-05-23 00:56:13.498416 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498420 | orchestrator | 2025-05-23 00:56:13.498425 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.498430 | orchestrator | Friday 23 May 2025 00:50:39 +0000 (0:00:00.341) 0:07:24.961 ************ 2025-05-23 00:56:13.498434 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498439 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498444 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498449 | orchestrator | 2025-05-23 00:56:13.498453 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.498458 | orchestrator | Friday 23 May 2025 00:50:40 +0000 (0:00:00.622) 0:07:25.583 ************ 2025-05-23 00:56:13.498463 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498468 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498472 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498477 | orchestrator | 2025-05-23 00:56:13.498481 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.498486 | orchestrator | Friday 23 May 2025 00:50:40 +0000 (0:00:00.355) 0:07:25.939 ************ 2025-05-23 00:56:13.498491 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498496 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498500 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498505 | orchestrator | 2025-05-23 00:56:13.498510 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.498515 | orchestrator | Friday 23 May 2025 00:50:40 +0000 (0:00:00.371) 0:07:26.311 ************ 2025-05-23 00:56:13.498519 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498524 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498529 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498533 | orchestrator | 2025-05-23 00:56:13.498538 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.498543 | orchestrator | Friday 23 May 2025 00:50:41 +0000 (0:00:00.294) 0:07:26.606 ************ 2025-05-23 00:56:13.498547 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498552 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498557 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498561 | orchestrator | 2025-05-23 00:56:13.498566 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.498571 | orchestrator | Friday 23 May 2025 00:50:41 +0000 (0:00:00.656) 0:07:27.262 ************ 2025-05-23 00:56:13.498575 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498580 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498586 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498591 | orchestrator | 2025-05-23 00:56:13.498614 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.498620 | orchestrator | Friday 23 May 2025 00:50:42 +0000 (0:00:00.350) 0:07:27.613 ************ 2025-05-23 00:56:13.498626 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498631 | orchestrator | skipping: [testbed-node-4] 2025-05-23 
00:56:13.498636 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498642 | orchestrator | 2025-05-23 00:56:13.498647 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.498652 | orchestrator | Friday 23 May 2025 00:50:42 +0000 (0:00:00.393) 0:07:28.007 ************ 2025-05-23 00:56:13.498657 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498663 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498668 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498677 | orchestrator | 2025-05-23 00:56:13.498683 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.498688 | orchestrator | Friday 23 May 2025 00:50:42 +0000 (0:00:00.366) 0:07:28.373 ************ 2025-05-23 00:56:13.498693 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498698 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498706 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498712 | orchestrator | 2025-05-23 00:56:13.498717 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.498722 | orchestrator | Friday 23 May 2025 00:50:43 +0000 (0:00:00.576) 0:07:28.950 ************ 2025-05-23 00:56:13.498728 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498733 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498738 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498743 | orchestrator | 2025-05-23 00:56:13.498749 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.498754 | orchestrator | Friday 23 May 2025 00:50:43 +0000 (0:00:00.319) 0:07:29.270 ************ 2025-05-23 00:56:13.498759 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.498765 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.498770 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498775 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.498780 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.498785 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498791 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.498796 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.498801 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498806 | orchestrator | 2025-05-23 00:56:13.498812 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.498817 | orchestrator | Friday 23 May 2025 00:50:44 +0000 (0:00:00.370) 0:07:29.641 ************ 2025-05-23 00:56:13.498822 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-23 00:56:13.498828 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-23 00:56:13.498833 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498838 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-23 00:56:13.498843 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-23 00:56:13.498848 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498854 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-23 
00:56:13.498859 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-23 00:56:13.498864 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498870 | orchestrator | 2025-05-23 00:56:13.498875 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.498880 | orchestrator | Friday 23 May 2025 00:50:44 +0000 (0:00:00.363) 0:07:30.005 ************ 2025-05-23 00:56:13.498885 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498891 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498896 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498901 | orchestrator | 2025-05-23 00:56:13.498906 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.498912 | orchestrator | Friday 23 May 2025 00:50:45 +0000 (0:00:00.762) 0:07:30.767 ************ 2025-05-23 00:56:13.498917 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498922 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498927 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498933 | orchestrator | 2025-05-23 00:56:13.498938 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.498944 | orchestrator | Friday 23 May 2025 00:50:45 +0000 (0:00:00.359) 0:07:31.127 ************ 2025-05-23 00:56:13.498952 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498958 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498963 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.498968 | orchestrator | 2025-05-23 00:56:13.498974 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.498979 | orchestrator | Friday 23 May 2025 00:50:45 +0000 (0:00:00.337) 0:07:31.464 ************ 2025-05-23 00:56:13.498984 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.498989 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.498995 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499000 | orchestrator | 2025-05-23 00:56:13.499005 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.499011 | orchestrator | Friday 23 May 2025 00:50:46 +0000 (0:00:00.353) 0:07:31.817 ************ 2025-05-23 00:56:13.499016 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499021 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499026 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499031 | orchestrator | 2025-05-23 00:56:13.499037 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.499042 | orchestrator | Friday 23 May 2025 00:50:46 +0000 (0:00:00.669) 0:07:32.487 ************ 2025-05-23 00:56:13.499047 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499052 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499058 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499063 | orchestrator | 2025-05-23 00:56:13.499084 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.499091 | orchestrator | Friday 23 May 2025 00:50:47 +0000 (0:00:00.342) 0:07:32.830 ************ 2025-05-23 00:56:13.499096 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.499101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.499106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.499112 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499117 | orchestrator | 2025-05-23 00:56:13.499122 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.499128 | orchestrator | Friday 23 May 2025 00:50:47 +0000 (0:00:00.438) 0:07:33.268 ************ 2025-05-23 00:56:13.499133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.499138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.499143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.499149 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499154 | orchestrator | 2025-05-23 00:56:13.499162 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.499168 | orchestrator | Friday 23 May 2025 00:50:48 +0000 (0:00:00.421) 0:07:33.690 ************ 2025-05-23 00:56:13.499173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.499179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.499184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.499189 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499194 | orchestrator | 2025-05-23 00:56:13.499200 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.499205 | orchestrator | Friday 23 May 2025 00:50:48 +0000 (0:00:00.406) 0:07:34.096 ************ 2025-05-23 00:56:13.499210 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499215 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499221 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499226 | orchestrator | 2025-05-23 00:56:13.499231 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.499236 | orchestrator | Friday 23 May 2025 00:50:49 +0000 (0:00:00.398) 0:07:34.494 ************ 2025-05-23 00:56:13.499242 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.499251 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499256 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.499261 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499266 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.499272 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499277 | orchestrator | 2025-05-23 00:56:13.499282 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.499287 | orchestrator | Friday 23 May 2025 00:50:49 +0000 (0:00:00.789) 0:07:35.284 ************ 2025-05-23 00:56:13.499293 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499298 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499303 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499308 | orchestrator | 2025-05-23 00:56:13.499313 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.499319 | 
orchestrator | Friday 23 May 2025 00:50:50 +0000 (0:00:00.345) 0:07:35.630 ************ 2025-05-23 00:56:13.499324 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499329 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499335 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499340 | orchestrator | 2025-05-23 00:56:13.499345 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.499350 | orchestrator | Friday 23 May 2025 00:50:50 +0000 (0:00:00.351) 0:07:35.981 ************ 2025-05-23 00:56:13.499356 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.499361 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499366 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.499371 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499377 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.499382 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499423 | orchestrator | 2025-05-23 00:56:13.499428 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.499434 | orchestrator | Friday 23 May 2025 00:50:51 +0000 (0:00:01.145) 0:07:37.127 ************ 2025-05-23 00:56:13.499439 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.499445 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499450 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.499455 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499461 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.499466 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499472 | orchestrator | 2025-05-23 00:56:13.499477 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.499482 | orchestrator | Friday 23 May 2025 00:50:52 +0000 (0:00:00.458) 0:07:37.585 ************ 2025-05-23 00:56:13.499487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.499493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.499498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.499503 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.499514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.499536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.499543 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.499553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.499562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.499568 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499573 | orchestrator | 2025-05-23 00:56:13.499578 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.499584 | orchestrator | Friday 23 May 2025 00:50:52 +0000 (0:00:00.792) 0:07:38.378 ************ 2025-05-23 00:56:13.499589 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499594 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499600 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499605 | orchestrator | 2025-05-23 00:56:13.499610 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.499615 | orchestrator | Friday 23 May 2025 00:50:53 +0000 (0:00:00.749) 0:07:39.128 ************ 2025-05-23 00:56:13.499624 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.499629 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499634 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.499640 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499645 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.499650 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499655 | orchestrator | 2025-05-23 00:56:13.499661 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.499666 | orchestrator | Friday 23 May 2025 00:50:54 +0000 (0:00:00.539) 0:07:39.668 ************ 2025-05-23 00:56:13.499672 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499677 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499682 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499687 | orchestrator | 2025-05-23 00:56:13.499693 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.499698 | orchestrator | Friday 23 May 2025 00:50:54 +0000 (0:00:00.687) 0:07:40.356 ************ 2025-05-23 00:56:13.499703 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499709 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499714 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499719 | orchestrator | 2025-05-23 00:56:13.499725 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-23 00:56:13.499730 | orchestrator | Friday 23 May 2025 00:50:55 +0000 (0:00:00.524) 0:07:40.880 ************ 2025-05-23 00:56:13.499735 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.499741 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.499746 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.499751 | orchestrator | 2025-05-23 00:56:13.499757 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-23 00:56:13.499762 | orchestrator | Friday 23 May 2025 00:50:55 +0000 (0:00:00.448) 0:07:41.328 ************ 2025-05-23 00:56:13.499767 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-23 00:56:13.499772 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:56:13.499778 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:56:13.499783 | orchestrator | 2025-05-23 00:56:13.499789 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-23 00:56:13.499794 | orchestrator | Friday 23 May 2025 00:50:56 +0000 (0:00:00.605) 
0:07:41.934 ************ 2025-05-23 00:56:13.499799 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.499805 | orchestrator | 2025-05-23 00:56:13.499810 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-23 00:56:13.499815 | orchestrator | Friday 23 May 2025 00:50:56 +0000 (0:00:00.472) 0:07:42.406 ************ 2025-05-23 00:56:13.499820 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499826 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499831 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499842 | orchestrator | 2025-05-23 00:56:13.499847 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-23 00:56:13.499852 | orchestrator | Friday 23 May 2025 00:50:57 +0000 (0:00:00.279) 0:07:42.686 ************ 2025-05-23 00:56:13.499858 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499863 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499868 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499873 | orchestrator | 2025-05-23 00:56:13.499879 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-23 00:56:13.499884 | orchestrator | Friday 23 May 2025 00:50:57 +0000 (0:00:00.438) 0:07:43.124 ************ 2025-05-23 00:56:13.499889 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499895 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499900 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499905 | orchestrator | 2025-05-23 00:56:13.499910 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-23 00:56:13.499916 | orchestrator | Friday 23 May 2025 00:50:57 +0000 (0:00:00.273) 0:07:43.397 ************ 2025-05-23 00:56:13.499921 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.499926 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.499932 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.499937 | orchestrator | 2025-05-23 00:56:13.499942 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-23 00:56:13.499948 | orchestrator | Friday 23 May 2025 00:50:58 +0000 (0:00:00.301) 0:07:43.699 ************ 2025-05-23 00:56:13.499953 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.499958 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.499964 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.499969 | orchestrator | 2025-05-23 00:56:13.499974 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-23 00:56:13.499979 | orchestrator | Friday 23 May 2025 00:50:58 +0000 (0:00:00.612) 0:07:44.311 ************ 2025-05-23 00:56:13.499999 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.500005 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.500011 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.500016 | orchestrator | 2025-05-23 00:56:13.500021 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-23 00:56:13.500027 | orchestrator | Friday 23 May 2025 00:50:59 +0000 (0:00:00.721) 0:07:45.033 ************ 2025-05-23 00:56:13.500032 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 
'enable': True}) 2025-05-23 00:56:13.500038 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-23 00:56:13.500043 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-23 00:56:13.500049 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-23 00:56:13.500054 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-23 00:56:13.500062 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-23 00:56:13.500068 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-23 00:56:13.500073 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-23 00:56:13.500081 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-23 00:56:13.500089 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-23 00:56:13.500098 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-23 00:56:13.500106 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-23 00:56:13.500115 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-23 00:56:13.500129 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-23 00:56:13.500139 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-23 00:56:13.500144 | orchestrator | 2025-05-23 00:56:13.500150 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-23 00:56:13.500155 | orchestrator | Friday 23 May 2025 00:51:02 +0000 (0:00:03.075) 0:07:48.109 ************ 2025-05-23 00:56:13.500160 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500165 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500171 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500176 | orchestrator | 2025-05-23 00:56:13.500181 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-23 00:56:13.500186 | orchestrator | Friday 23 May 2025 00:51:02 +0000 (0:00:00.308) 0:07:48.417 ************ 2025-05-23 00:56:13.500192 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.500197 | orchestrator | 2025-05-23 00:56:13.500202 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-23 00:56:13.500211 | orchestrator | Friday 23 May 2025 00:51:03 +0000 (0:00:00.613) 0:07:49.030 ************ 2025-05-23 00:56:13.500216 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-23 00:56:13.500221 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-23 00:56:13.500227 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-23 00:56:13.500232 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-23 00:56:13.500237 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-23 00:56:13.500242 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-23 00:56:13.500248 | orchestrator | 2025-05-23 00:56:13.500253 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-23 00:56:13.500258 | orchestrator | Friday 23 May 2025 00:51:04 +0000 (0:00:00.965) 0:07:49.996 ************ 2025-05-23 00:56:13.500264 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:56:13.500269 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.500274 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-23 00:56:13.500279 | orchestrator | 2025-05-23 00:56:13.500285 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-23 00:56:13.500290 | orchestrator | Friday 23 May 2025 00:51:06 +0000 (0:00:01.712) 0:07:51.708 ************ 2025-05-23 00:56:13.500295 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 00:56:13.500301 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.500306 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.500311 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 00:56:13.500316 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.500322 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.500327 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 00:56:13.500332 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.500337 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.500343 | orchestrator | 2025-05-23 00:56:13.500348 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-23 00:56:13.500353 | orchestrator | Friday 23 May 2025 00:51:07 +0000 (0:00:01.287) 0:07:52.996 ************ 2025-05-23 00:56:13.500358 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.500364 | orchestrator | 2025-05-23 00:56:13.500369 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-23 00:56:13.500411 | orchestrator | Friday 23 May 2025 00:51:09 +0000 (0:00:02.262) 0:07:55.259 ************ 2025-05-23 00:56:13.500422 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.500432 | orchestrator | 2025-05-23 00:56:13.500437 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-23 00:56:13.500443 | orchestrator | Friday 23 May 2025 00:51:10 +0000 (0:00:00.600) 0:07:55.859 ************ 2025-05-23 00:56:13.500448 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500453 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500458 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500464 | orchestrator | 2025-05-23 00:56:13.500469 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-23 00:56:13.500475 | orchestrator | Friday 23 May 2025 00:51:10 +0000 (0:00:00.598) 0:07:56.457 ************ 2025-05-23 00:56:13.500480 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500485 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500490 | orchestrator | skipping: [testbed-node-5] 2025-05-23 
00:56:13.500496 | orchestrator | 2025-05-23 00:56:13.500504 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-23 00:56:13.500510 | orchestrator | Friday 23 May 2025 00:51:11 +0000 (0:00:00.340) 0:07:56.797 ************ 2025-05-23 00:56:13.500515 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500520 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500526 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500531 | orchestrator | 2025-05-23 00:56:13.500536 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-23 00:56:13.500541 | orchestrator | Friday 23 May 2025 00:51:11 +0000 (0:00:00.349) 0:07:57.147 ************ 2025-05-23 00:56:13.500547 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.500552 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.500558 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.500563 | orchestrator | 2025-05-23 00:56:13.500568 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-23 00:56:13.500574 | orchestrator | Friday 23 May 2025 00:51:12 +0000 (0:00:00.357) 0:07:57.505 ************ 2025-05-23 00:56:13.500579 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.500584 | orchestrator | 2025-05-23 00:56:13.500590 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-23 00:56:13.500595 | orchestrator | Friday 23 May 2025 00:51:13 +0000 (0:00:01.024) 0:07:58.529 ************ 2025-05-23 00:56:13.500600 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-17b95678-9240-5166-938b-e89fe6559568', 'data_vg': 'ceph-17b95678-9240-5166-938b-e89fe6559568'}) 2025-05-23 00:56:13.500606 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-125adf16-eac9-5ada-96e7-bcd4f30a545d', 'data_vg': 'ceph-125adf16-eac9-5ada-96e7-bcd4f30a545d'}) 2025-05-23 00:56:13.500612 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0', 'data_vg': 'ceph-1c1d7620-81eb-54f7-8ffb-e9df7a8995e0'}) 2025-05-23 00:56:13.500617 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0', 'data_vg': 'ceph-8fe28d0c-4762-50fd-9b7b-6f1bb47ff5c0'}) 2025-05-23 00:56:13.500623 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dafe69f8-630b-5486-ba76-590e0b4d1820', 'data_vg': 'ceph-dafe69f8-630b-5486-ba76-590e0b4d1820'}) 2025-05-23 00:56:13.500628 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8bf3a31b-2d76-5988-bbd2-6800630d4c9a', 'data_vg': 'ceph-8bf3a31b-2d76-5988-bbd2-6800630d4c9a'}) 2025-05-23 00:56:13.500633 | orchestrator | 2025-05-23 00:56:13.500639 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-23 00:56:13.500644 | orchestrator | Friday 23 May 2025 00:51:52 +0000 (0:00:39.455) 0:08:37.985 ************ 2025-05-23 00:56:13.500649 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500655 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500663 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500669 | orchestrator | 2025-05-23 00:56:13.500674 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-05-23 00:56:13.500679 | orchestrator | Friday 23 May 2025 00:51:53 +0000 (0:00:00.513) 0:08:38.498 ************ 2025-05-23 00:56:13.500685 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.500690 | orchestrator | 2025-05-23 00:56:13.500695 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-23 00:56:13.500701 | orchestrator | Friday 23 May 2025 00:51:53 +0000 (0:00:00.549) 0:08:39.048 ************ 2025-05-23 00:56:13.500706 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.500711 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.500717 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.500722 | orchestrator | 2025-05-23 00:56:13.500727 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-23 00:56:13.500733 | orchestrator | Friday 23 May 2025 00:51:54 +0000 (0:00:00.656) 0:08:39.704 ************ 2025-05-23 00:56:13.500738 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.500743 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.500749 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.500754 | orchestrator | 2025-05-23 00:56:13.500760 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-23 00:56:13.500765 | orchestrator | Friday 23 May 2025 00:51:55 +0000 (0:00:01.683) 0:08:41.388 ************ 2025-05-23 00:56:13.500787 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.500793 | orchestrator | 2025-05-23 00:56:13.500799 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-23 00:56:13.500805 | orchestrator | Friday 23 May 2025 00:51:56 +0000 (0:00:00.515) 0:08:41.904 ************ 2025-05-23 00:56:13.500810 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.500815 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.500821 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.500826 | orchestrator | 2025-05-23 00:56:13.500832 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-23 00:56:13.500837 | orchestrator | Friday 23 May 2025 00:51:57 +0000 (0:00:01.193) 0:08:43.097 ************ 2025-05-23 00:56:13.500842 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.500848 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.500853 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.500858 | orchestrator | 2025-05-23 00:56:13.500864 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-23 00:56:13.500869 | orchestrator | Friday 23 May 2025 00:51:59 +0000 (0:00:01.398) 0:08:44.496 ************ 2025-05-23 00:56:13.500878 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.500883 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.500889 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.500894 | orchestrator | 2025-05-23 00:56:13.500899 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-23 00:56:13.500905 | orchestrator | Friday 23 May 2025 00:52:00 +0000 (0:00:01.677) 0:08:46.173 ************ 2025-05-23 00:56:13.500910 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 00:56:13.500916 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500921 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500926 | orchestrator | 2025-05-23 00:56:13.500932 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-23 00:56:13.500937 | orchestrator | Friday 23 May 2025 00:52:01 +0000 (0:00:00.343) 0:08:46.517 ************ 2025-05-23 00:56:13.500943 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.500948 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.500953 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.500959 | orchestrator | 2025-05-23 00:56:13.500964 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-23 00:56:13.500973 | orchestrator | Friday 23 May 2025 00:52:01 +0000 (0:00:00.582) 0:08:47.099 ************ 2025-05-23 00:56:13.500978 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-23 00:56:13.500984 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-23 00:56:13.500989 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-23 00:56:13.500995 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-05-23 00:56:13.501000 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-05-23 00:56:13.501005 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-05-23 00:56:13.501011 | orchestrator | 2025-05-23 00:56:13.501016 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-23 00:56:13.501021 | orchestrator | Friday 23 May 2025 00:52:02 +0000 (0:00:01.004) 0:08:48.103 ************ 2025-05-23 00:56:13.501027 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-23 00:56:13.501032 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-23 00:56:13.501038 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-23 00:56:13.501043 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-23 00:56:13.501048 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-05-23 00:56:13.501054 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-23 00:56:13.501059 | orchestrator | 2025-05-23 00:56:13.501064 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-23 00:56:13.501070 | orchestrator | Friday 23 May 2025 00:52:05 +0000 (0:00:03.336) 0:08:51.440 ************ 2025-05-23 00:56:13.501075 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501080 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501086 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.501091 | orchestrator | 2025-05-23 00:56:13.501096 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-23 00:56:13.501102 | orchestrator | Friday 23 May 2025 00:52:08 +0000 (0:00:02.150) 0:08:53.591 ************ 2025-05-23 00:56:13.501107 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501112 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501118 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
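The OSD activation sequence logged above (ceph-volume creating bluestore OSDs on the pre-built LVs, the generated ceph-osd@ systemd units, and the noup flag set before startup) corresponds roughly to the following manual commands. This is a minimal sketch only: it assumes OSD id 0, the default cluster name "ceph", and a plain shell on the OSD node, whereas in this deployment the calls are templated by ceph-ansible and executed inside the ceph containers. The VG/LV names are copied from the log; the "wait for all osd to be up" retry seen here clears once the daemons have registered with the monitors.

    # keep new OSDs from being marked up while they are being created (run on a mon node)
    ceph osd set noup

    # create a bluestore OSD on the pre-built logical volume; --dmcrypt matches the
    # "-e osd_dmcrypt=1" container environment selected earlier in this play
    ceph-volume lvm create --bluestore --dmcrypt \
        --data ceph-17b95678-9240-5166-938b-e89fe6559568/osd-block-17b95678-9240-5166-938b-e89fe6559568

    # start the OSD through the generated unit and the ceph-osd target
    systemctl enable --now ceph-osd@0.service ceph-osd.target

    # let the new OSDs come up, then poll until they report in
    ceph osd unset noup
    ceph osd stat        # repeat until the "up" count matches the expected number of OSDs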
2025-05-23 00:56:13.501123 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.501129 | orchestrator | 2025-05-23 00:56:13.501134 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-23 00:56:13.501139 | orchestrator | Friday 23 May 2025 00:52:20 +0000 (0:00:12.557) 0:09:06.148 ************ 2025-05-23 00:56:13.501145 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501150 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501155 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501161 | orchestrator | 2025-05-23 00:56:13.501166 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-23 00:56:13.501172 | orchestrator | Friday 23 May 2025 00:52:21 +0000 (0:00:00.471) 0:09:06.620 ************ 2025-05-23 00:56:13.501177 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501182 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501188 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501193 | orchestrator | 2025-05-23 00:56:13.501198 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.501204 | orchestrator | Friday 23 May 2025 00:52:22 +0000 (0:00:01.095) 0:09:07.716 ************ 2025-05-23 00:56:13.501209 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.501214 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.501220 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.501225 | orchestrator | 2025-05-23 00:56:13.501230 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-23 00:56:13.501236 | orchestrator | Friday 23 May 2025 00:52:22 +0000 (0:00:00.686) 0:09:08.402 ************ 2025-05-23 00:56:13.501257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.501266 | orchestrator | 2025-05-23 00:56:13.501272 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-23 00:56:13.501277 | orchestrator | Friday 23 May 2025 00:52:23 +0000 (0:00:00.794) 0:09:09.197 ************ 2025-05-23 00:56:13.501282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.501288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.501293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.501298 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501304 | orchestrator | 2025-05-23 00:56:13.501309 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-23 00:56:13.501314 | orchestrator | Friday 23 May 2025 00:52:24 +0000 (0:00:00.445) 0:09:09.642 ************ 2025-05-23 00:56:13.501320 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501325 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501330 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501335 | orchestrator | 2025-05-23 00:56:13.501343 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-23 00:56:13.501349 | orchestrator | Friday 23 May 2025 00:52:24 +0000 (0:00:00.346) 0:09:09.988 ************ 2025-05-23 00:56:13.501354 | orchestrator | skipping: [testbed-node-3] 2025-05-23 
00:56:13.501359 | orchestrator | 2025-05-23 00:56:13.501364 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-23 00:56:13.501370 | orchestrator | Friday 23 May 2025 00:52:24 +0000 (0:00:00.224) 0:09:10.212 ************ 2025-05-23 00:56:13.501375 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501380 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501401 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501406 | orchestrator | 2025-05-23 00:56:13.501412 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-23 00:56:13.501417 | orchestrator | Friday 23 May 2025 00:52:25 +0000 (0:00:00.567) 0:09:10.780 ************ 2025-05-23 00:56:13.501422 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501428 | orchestrator | 2025-05-23 00:56:13.501433 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-23 00:56:13.501438 | orchestrator | Friday 23 May 2025 00:52:25 +0000 (0:00:00.261) 0:09:11.042 ************ 2025-05-23 00:56:13.501444 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501449 | orchestrator | 2025-05-23 00:56:13.501455 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-23 00:56:13.501460 | orchestrator | Friday 23 May 2025 00:52:25 +0000 (0:00:00.249) 0:09:11.291 ************ 2025-05-23 00:56:13.501465 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501470 | orchestrator | 2025-05-23 00:56:13.501476 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-23 00:56:13.501481 | orchestrator | Friday 23 May 2025 00:52:25 +0000 (0:00:00.149) 0:09:11.441 ************ 2025-05-23 00:56:13.501486 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501492 | orchestrator | 2025-05-23 00:56:13.501497 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-23 00:56:13.501502 | orchestrator | Friday 23 May 2025 00:52:26 +0000 (0:00:00.239) 0:09:11.680 ************ 2025-05-23 00:56:13.501508 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501513 | orchestrator | 2025-05-23 00:56:13.501518 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-23 00:56:13.501523 | orchestrator | Friday 23 May 2025 00:52:26 +0000 (0:00:00.243) 0:09:11.923 ************ 2025-05-23 00:56:13.501529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.501534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.501539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.501545 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501550 | orchestrator | 2025-05-23 00:56:13.501560 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-23 00:56:13.501566 | orchestrator | Friday 23 May 2025 00:52:26 +0000 (0:00:00.487) 0:09:12.411 ************ 2025-05-23 00:56:13.501571 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501576 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501581 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501587 | orchestrator | 2025-05-23 00:56:13.501592 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-05-23 00:56:13.501597 | orchestrator | Friday 23 May 2025 00:52:27 +0000 (0:00:00.404) 0:09:12.816 ************ 2025-05-23 00:56:13.501603 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501608 | orchestrator | 2025-05-23 00:56:13.501613 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-23 00:56:13.501619 | orchestrator | Friday 23 May 2025 00:52:27 +0000 (0:00:00.527) 0:09:13.344 ************ 2025-05-23 00:56:13.501624 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501629 | orchestrator | 2025-05-23 00:56:13.501635 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.501640 | orchestrator | Friday 23 May 2025 00:52:28 +0000 (0:00:00.225) 0:09:13.569 ************ 2025-05-23 00:56:13.501645 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.501651 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.501656 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.501661 | orchestrator | 2025-05-23 00:56:13.501667 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-23 00:56:13.501672 | orchestrator | 2025-05-23 00:56:13.501677 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.501683 | orchestrator | Friday 23 May 2025 00:52:30 +0000 (0:00:02.808) 0:09:16.378 ************ 2025-05-23 00:56:13.501688 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.501694 | orchestrator | 2025-05-23 00:56:13.501699 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-23 00:56:13.501721 | orchestrator | Friday 23 May 2025 00:52:32 +0000 (0:00:01.215) 0:09:17.594 ************ 2025-05-23 00:56:13.501728 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501733 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.501739 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501744 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501750 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.501755 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.501760 | orchestrator | 2025-05-23 00:56:13.501766 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.501771 | orchestrator | Friday 23 May 2025 00:52:32 +0000 (0:00:00.700) 0:09:18.295 ************ 2025-05-23 00:56:13.501776 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.501782 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.501787 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.501792 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.501797 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.501803 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.501808 | orchestrator | 2025-05-23 00:56:13.501813 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.501821 | orchestrator | Friday 23 May 2025 00:52:34 +0000 (0:00:01.235) 0:09:19.530 ************ 2025-05-23 00:56:13.501827 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.501832 | orchestrator | skipping: 
[testbed-node-1] 2025-05-23 00:56:13.501837 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.501842 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.501848 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.501853 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.501858 | orchestrator | 2025-05-23 00:56:13.501864 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.501873 | orchestrator | Friday 23 May 2025 00:52:35 +0000 (0:00:01.004) 0:09:20.535 ************ 2025-05-23 00:56:13.501878 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.501883 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.501889 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.501894 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.501899 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.501905 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.501910 | orchestrator | 2025-05-23 00:56:13.501915 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.501920 | orchestrator | Friday 23 May 2025 00:52:36 +0000 (0:00:01.266) 0:09:21.802 ************ 2025-05-23 00:56:13.501926 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501931 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.501936 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501941 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.501947 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.501952 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.501957 | orchestrator | 2025-05-23 00:56:13.501963 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.501968 | orchestrator | Friday 23 May 2025 00:52:37 +0000 (0:00:00.755) 0:09:22.557 ************ 2025-05-23 00:56:13.501973 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.501978 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.501984 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.501989 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.501994 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.501999 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502005 | orchestrator | 2025-05-23 00:56:13.502010 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.502034 | orchestrator | Friday 23 May 2025 00:52:37 +0000 (0:00:00.897) 0:09:23.455 ************ 2025-05-23 00:56:13.502041 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502046 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502052 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502057 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502062 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502067 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502073 | orchestrator | 2025-05-23 00:56:13.502078 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.502083 | orchestrator | Friday 23 May 2025 00:52:38 +0000 (0:00:00.605) 0:09:24.061 ************ 2025-05-23 00:56:13.502089 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502094 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502099 | 
orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502104 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502110 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502115 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502120 | orchestrator | 2025-05-23 00:56:13.502125 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.502131 | orchestrator | Friday 23 May 2025 00:52:39 +0000 (0:00:00.823) 0:09:24.884 ************ 2025-05-23 00:56:13.502136 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502141 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502146 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502152 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502157 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502162 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502167 | orchestrator | 2025-05-23 00:56:13.502173 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.502178 | orchestrator | Friday 23 May 2025 00:52:39 +0000 (0:00:00.556) 0:09:25.441 ************ 2025-05-23 00:56:13.502183 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502192 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502198 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502203 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502208 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502213 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502218 | orchestrator | 2025-05-23 00:56:13.502224 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.502229 | orchestrator | Friday 23 May 2025 00:52:40 +0000 (0:00:00.771) 0:09:26.212 ************ 2025-05-23 00:56:13.502234 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.502240 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.502245 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.502250 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.502256 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.502261 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.502266 | orchestrator | 2025-05-23 00:56:13.502287 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.502294 | orchestrator | Friday 23 May 2025 00:52:41 +0000 (0:00:01.021) 0:09:27.234 ************ 2025-05-23 00:56:13.502299 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502305 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502310 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502315 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502320 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502326 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502331 | orchestrator | 2025-05-23 00:56:13.502336 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.502342 | orchestrator | Friday 23 May 2025 00:52:42 +0000 (0:00:00.720) 0:09:27.954 ************ 2025-05-23 00:56:13.502347 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.502352 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.502358 | orchestrator | ok: [testbed-node-2] 
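The check_running_containers tasks in this play only record whether each daemon's container is present on a node; nothing is changed. A minimal hand-run equivalent is sketched below, assuming docker as the container runtime and the ceph-ansible container naming scheme (ceph-<daemon> in the container name) — both are assumptions, since runtime and names are driven by the inventory.

    # non-empty output means an OSD container is present on this host
    docker ps -q --filter "name=ceph-osd"

    # the crash collector runs on every node, which is why the ceph-crash
    # check above returns "ok" for all six testbed nodes
    docker ps -q --filter "name=ceph-crash"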
2025-05-23 00:56:13.502363 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502368 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502374 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502379 | orchestrator | 2025-05-23 00:56:13.502403 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.502413 | orchestrator | Friday 23 May 2025 00:52:43 +0000 (0:00:00.621) 0:09:28.576 ************ 2025-05-23 00:56:13.502423 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502431 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502440 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502450 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.502455 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.502461 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.502466 | orchestrator | 2025-05-23 00:56:13.502471 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.502477 | orchestrator | Friday 23 May 2025 00:52:43 +0000 (0:00:00.793) 0:09:29.369 ************ 2025-05-23 00:56:13.502482 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502487 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502492 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502498 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.502503 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.502508 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.502513 | orchestrator | 2025-05-23 00:56:13.502519 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.502524 | orchestrator | Friday 23 May 2025 00:52:44 +0000 (0:00:00.728) 0:09:30.098 ************ 2025-05-23 00:56:13.502529 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502535 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502540 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502545 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.502550 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.502562 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.502567 | orchestrator | 2025-05-23 00:56:13.502573 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.502578 | orchestrator | Friday 23 May 2025 00:52:45 +0000 (0:00:00.826) 0:09:30.924 ************ 2025-05-23 00:56:13.502583 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502588 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502594 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502599 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502604 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502609 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502615 | orchestrator | 2025-05-23 00:56:13.502620 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.502625 | orchestrator | Friday 23 May 2025 00:52:46 +0000 (0:00:00.568) 0:09:31.493 ************ 2025-05-23 00:56:13.502631 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502636 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502641 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502647 | orchestrator | 
skipping: [testbed-node-3] 2025-05-23 00:56:13.502652 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502657 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502663 | orchestrator | 2025-05-23 00:56:13.502668 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.502673 | orchestrator | Friday 23 May 2025 00:52:46 +0000 (0:00:00.733) 0:09:32.226 ************ 2025-05-23 00:56:13.502678 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.502684 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.502689 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.502694 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502700 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502705 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502710 | orchestrator | 2025-05-23 00:56:13.502715 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.502721 | orchestrator | Friday 23 May 2025 00:52:47 +0000 (0:00:00.576) 0:09:32.803 ************ 2025-05-23 00:56:13.502726 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.502731 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.502737 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.502742 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.502747 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.502752 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.502757 | orchestrator | 2025-05-23 00:56:13.502763 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.502768 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:00.819) 0:09:33.622 ************ 2025-05-23 00:56:13.502773 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502779 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502784 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502789 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502794 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502800 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502805 | orchestrator | 2025-05-23 00:56:13.502810 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.502815 | orchestrator | Friday 23 May 2025 00:52:48 +0000 (0:00:00.583) 0:09:34.206 ************ 2025-05-23 00:56:13.502821 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502826 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502831 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502836 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502842 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502865 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502871 | orchestrator | 2025-05-23 00:56:13.502876 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.502882 | orchestrator | Friday 23 May 2025 00:52:49 +0000 (0:00:00.730) 0:09:34.936 ************ 2025-05-23 00:56:13.502890 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502896 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502901 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502906 | orchestrator | skipping: [testbed-node-3] 2025-05-23 
00:56:13.502911 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502917 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502922 | orchestrator | 2025-05-23 00:56:13.502927 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.502933 | orchestrator | Friday 23 May 2025 00:52:49 +0000 (0:00:00.541) 0:09:35.478 ************ 2025-05-23 00:56:13.502938 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.502943 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.502948 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.502955 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.502965 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.502971 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.502978 | orchestrator | 2025-05-23 00:56:13.502984 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.502991 | orchestrator | Friday 23 May 2025 00:52:50 +0000 (0:00:00.760) 0:09:36.238 ************ 2025-05-23 00:56:13.502997 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503004 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503010 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503017 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503023 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503029 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503036 | orchestrator | 2025-05-23 00:56:13.503042 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.503049 | orchestrator | Friday 23 May 2025 00:52:51 +0000 (0:00:00.667) 0:09:36.906 ************ 2025-05-23 00:56:13.503055 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503062 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503068 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503075 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503081 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503088 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503094 | orchestrator | 2025-05-23 00:56:13.503101 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.503107 | orchestrator | Friday 23 May 2025 00:52:52 +0000 (0:00:00.861) 0:09:37.767 ************ 2025-05-23 00:56:13.503114 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503120 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503127 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503133 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503139 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503146 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503152 | orchestrator | 2025-05-23 00:56:13.503159 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.503165 | orchestrator | Friday 23 May 2025 00:52:53 +0000 (0:00:00.887) 0:09:38.654 ************ 2025-05-23 00:56:13.503172 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503178 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503185 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503191 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 00:56:13.503198 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503204 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503210 | orchestrator | 2025-05-23 00:56:13.503217 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.503224 | orchestrator | Friday 23 May 2025 00:52:54 +0000 (0:00:01.055) 0:09:39.710 ************ 2025-05-23 00:56:13.503230 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503244 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503250 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503257 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503263 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503269 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503276 | orchestrator | 2025-05-23 00:56:13.503282 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.503289 | orchestrator | Friday 23 May 2025 00:52:55 +0000 (0:00:00.801) 0:09:40.512 ************ 2025-05-23 00:56:13.503296 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503302 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503309 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503315 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503321 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503328 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503335 | orchestrator | 2025-05-23 00:56:13.503341 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.503347 | orchestrator | Friday 23 May 2025 00:52:56 +0000 (0:00:01.148) 0:09:41.660 ************ 2025-05-23 00:56:13.503354 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503360 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503367 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503373 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503380 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503426 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503433 | orchestrator | 2025-05-23 00:56:13.503440 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.503447 | orchestrator | Friday 23 May 2025 00:52:56 +0000 (0:00:00.651) 0:09:42.311 ************ 2025-05-23 00:56:13.503453 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503460 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503466 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503472 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503479 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503485 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503492 | orchestrator | 2025-05-23 00:56:13.503520 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.503527 | orchestrator | Friday 23 May 2025 00:52:57 +0000 (0:00:01.054) 0:09:43.366 ************ 2025-05-23 00:56:13.503534 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 00:56:13.503541 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-23 
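All of the num_osds tasks above are skipped in this play because OSD provisioning is not in scope here, but the counting they implement is straightforward: ask ceph-volume for a batch report and count the OSDs it would create. A minimal sketch follows; the device list is an assumed example and the exact flags used by ceph-ansible may differ.

- name: run 'ceph-volume lvm batch --report' (illustrative sketch)
  ansible.builtin.command: >
    ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
  register: lvm_batch_report
  changed_when: false
  vars:
    devices: ['/dev/sdb', '/dev/sdc']   # assumed example devices

- name: set_fact num_osds from the report (illustrative sketch)
  ansible.builtin.set_fact:
    num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"   # assumes the list-shaped report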
00:56:13.503547 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503553 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.503560 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-23 00:56:13.503567 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503573 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.503579 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-23 00:56:13.503586 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503592 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.503599 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.503605 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503612 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.503618 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.503629 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503635 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.503642 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.503648 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503655 | orchestrator | 2025-05-23 00:56:13.503661 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.503673 | orchestrator | Friday 23 May 2025 00:52:58 +0000 (0:00:00.805) 0:09:44.172 ************ 2025-05-23 00:56:13.503679 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-23 00:56:13.503686 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-23 00:56:13.503692 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503699 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-23 00:56:13.503706 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-23 00:56:13.503712 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503719 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-23 00:56:13.503725 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-23 00:56:13.503732 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503739 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-23 00:56:13.503745 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-23 00:56:13.503752 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503758 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-23 00:56:13.503765 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-23 00:56:13.503771 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503778 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-23 00:56:13.503784 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-23 00:56:13.503791 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503797 | orchestrator | 2025-05-23 00:56:13.503803 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.503809 | orchestrator | Friday 23 May 2025 00:52:59 +0000 (0:00:01.222) 0:09:45.394 ************ 2025-05-23 00:56:13.503815 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503821 | 
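The pair of tasks above (set_fact _osd_memory_target with its two empty items, then drop osd_memory_target from conf override) looks for an operator-supplied value under either accepted spelling, 'osd memory target' or 'osd_memory_target', and removes it from ceph_conf_overrides so the role can manage the option itself. A hedged sketch of that lookup, assuming the override lives in an 'osd' section; not the role's exact implementation.

- name: set_fact _osd_memory_target from ceph_conf_overrides (illustrative sketch)
  ansible.builtin.set_fact:
    _osd_memory_target: "{{ ceph_conf_overrides['osd'][item] }}"
  loop:
    - osd memory target
    - osd_memory_target
  when: item in ceph_conf_overrides.get('osd', {})

- name: drop osd_memory_target from conf override (illustrative sketch)
  ansible.builtin.set_fact:
    ceph_conf_overrides: >-
      {{ ceph_conf_overrides | combine({'osd': ceph_conf_overrides.get('osd', {})
         | dict2items
         | rejectattr('key', 'in', ['osd memory target', 'osd_memory_target'])
         | items2dict}) }}
  when: "'osd' in ceph_conf_overrides"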
orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503827 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503833 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503839 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503846 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503852 | orchestrator | 2025-05-23 00:56:13.503858 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.503864 | orchestrator | Friday 23 May 2025 00:53:00 +0000 (0:00:00.816) 0:09:46.210 ************ 2025-05-23 00:56:13.503870 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503876 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503882 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503888 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503894 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503900 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503906 | orchestrator | 2025-05-23 00:56:13.503912 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.503919 | orchestrator | Friday 23 May 2025 00:53:01 +0000 (0:00:00.917) 0:09:47.127 ************ 2025-05-23 00:56:13.503925 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503931 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503937 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.503943 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.503949 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.503955 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.503961 | orchestrator | 2025-05-23 00:56:13.503967 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.503973 | orchestrator | Friday 23 May 2025 00:53:02 +0000 (0:00:00.618) 0:09:47.746 ************ 2025-05-23 00:56:13.503980 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.503985 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.503991 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504001 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504007 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504013 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504019 | orchestrator | 2025-05-23 00:56:13.504025 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.504031 | orchestrator | Friday 23 May 2025 00:53:03 +0000 (0:00:00.826) 0:09:48.573 ************ 2025-05-23 00:56:13.504038 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504043 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504050 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504056 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504078 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504085 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504092 | orchestrator | 2025-05-23 00:56:13.504098 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.504104 | orchestrator | Friday 23 May 2025 00:53:03 +0000 (0:00:00.591) 0:09:49.165 ************ 2025-05-23 00:56:13.504110 | 
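The _radosgw_address tasks above pick the address the radosgw frontends will bind to, either from a CIDR given as radosgw_address_block or from an explicit radosgw_address; both variants are skipped on these hosts in this pass. A hedged sketch of the address-block case using the ansible.utils.ipaddr filter (it needs the ansible.utils collection and the netaddr library); the 192.168.16.0/20 block is only an assumed example.

- name: set_fact _radosgw_address to radosgw_address_block ipv4 (illustrative sketch)
  ansible.builtin.set_fact:
    _radosgw_address: "{{ ansible_facts['all_ipv4_addresses']
                          | ansible.utils.ipaddr(radosgw_address_block)
                          | first }}"
  vars:
    radosgw_address_block: 192.168.16.0/20   # assumed example CIDR
  when: ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(radosgw_address_block) | length > 0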
orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504116 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504122 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504128 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504134 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504140 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504146 | orchestrator | 2025-05-23 00:56:13.504152 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.504158 | orchestrator | Friday 23 May 2025 00:53:04 +0000 (0:00:00.789) 0:09:49.955 ************ 2025-05-23 00:56:13.504164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.504170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.504180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.504186 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504192 | orchestrator | 2025-05-23 00:56:13.504198 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.504217 | orchestrator | Friday 23 May 2025 00:53:04 +0000 (0:00:00.409) 0:09:50.364 ************ 2025-05-23 00:56:13.504223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.504230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.504236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.504242 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504248 | orchestrator | 2025-05-23 00:56:13.504254 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.504260 | orchestrator | Friday 23 May 2025 00:53:05 +0000 (0:00:00.473) 0:09:50.838 ************ 2025-05-23 00:56:13.504266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.504272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.504278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.504284 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504290 | orchestrator | 2025-05-23 00:56:13.504296 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.504302 | orchestrator | Friday 23 May 2025 00:53:05 +0000 (0:00:00.418) 0:09:51.257 ************ 2025-05-23 00:56:13.504308 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504315 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504321 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504327 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504333 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504339 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504345 | orchestrator | 2025-05-23 00:56:13.504351 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.504361 | orchestrator | Friday 23 May 2025 00:53:06 +0000 (0:00:00.935) 0:09:52.193 ************ 2025-05-23 00:56:13.504367 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.504373 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.504379 | orchestrator | 
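The _interface and radosgw_interface tasks just above are the per-interface variant of the same derivation; testbed-node-0 loops over the rgw hosts (testbed-node-3 to -5) on their behalf. As a hedged sketch of the ipv4 case on a single host, with the interface name assumed:

- name: set_fact _radosgw_address to radosgw_interface - ipv4 (illustrative sketch)
  ansible.builtin.set_fact:
    _radosgw_address: "{{ ansible_facts[radosgw_interface]['ipv4']['address'] }}"
  vars:
    radosgw_interface: eth0   # assumed interface name
  when: "'ipv4' in ansible_facts.get(radosgw_interface, {})"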
skipping: [testbed-node-0] 2025-05-23 00:56:13.504402 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.504408 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504415 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504421 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.504427 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504433 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.504439 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504445 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.504451 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504457 | orchestrator | 2025-05-23 00:56:13.504463 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.504469 | orchestrator | Friday 23 May 2025 00:53:07 +0000 (0:00:00.772) 0:09:52.966 ************ 2025-05-23 00:56:13.504475 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504481 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504487 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504493 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504499 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504505 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504511 | orchestrator | 2025-05-23 00:56:13.504517 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.504523 | orchestrator | Friday 23 May 2025 00:53:08 +0000 (0:00:00.704) 0:09:53.670 ************ 2025-05-23 00:56:13.504530 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504536 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504542 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504548 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504554 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504560 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504566 | orchestrator | 2025-05-23 00:56:13.504572 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.504578 | orchestrator | Friday 23 May 2025 00:53:08 +0000 (0:00:00.739) 0:09:54.410 ************ 2025-05-23 00:56:13.504584 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-23 00:56:13.504590 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504596 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-23 00:56:13.504602 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504608 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-23 00:56:13.504614 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504620 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.504626 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504632 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.504638 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504664 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.504671 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504678 | orchestrator | 2025-05-23 00:56:13.504684 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.504690 
| orchestrator | Friday 23 May 2025 00:53:10 +0000 (0:00:01.275) 0:09:55.685 ************ 2025-05-23 00:56:13.504696 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504702 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504708 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504714 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.504720 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504730 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.504737 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504746 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.504752 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504759 | orchestrator | 2025-05-23 00:56:13.504765 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.504771 | orchestrator | Friday 23 May 2025 00:53:10 +0000 (0:00:00.720) 0:09:56.405 ************ 2025-05-23 00:56:13.504777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-23 00:56:13.504783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-23 00:56:13.504789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-23 00:56:13.504795 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504801 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-23 00:56:13.504807 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-23 00:56:13.504813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-23 00:56:13.504819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-23 00:56:13.504825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-23 00:56:13.504831 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-23 00:56:13.504838 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.504850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.504856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.504862 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504868 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.504874 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.504880 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.504886 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504892 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.504904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.504910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.504917 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504923 | orchestrator | 2025-05-23 
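The skipped items in set_fact rgw_instances_host show the shape each radosgw instance gets: a dict with instance_name, radosgw_address and radosgw_frontend_port (rgw0 on 192.168.16.13 to .15, port 8081, in this testbed). A hedged sketch of how such a per-host list can be built for radosgw_num_instances instances; the variable names mirror the log but the body is illustrative only.

- name: set_fact rgw_instances without rgw multisite (illustrative sketch)
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"
  vars:
    radosgw_frontend_port: 8081   # value visible in the skipped items above

With one instance per host this yields exactly the rgw0 entries shown above.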
00:56:13.504929 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.504935 | orchestrator | Friday 23 May 2025 00:53:12 +0000 (0:00:01.459) 0:09:57.865 ************ 2025-05-23 00:56:13.504941 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.504947 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.504953 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.504959 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.504965 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.504971 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.504977 | orchestrator | 2025-05-23 00:56:13.504983 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.504990 | orchestrator | Friday 23 May 2025 00:53:13 +0000 (0:00:01.092) 0:09:58.957 ************ 2025-05-23 00:56:13.504996 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.505002 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.505008 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.505014 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.505020 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.505030 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505036 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505042 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.505048 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505054 | orchestrator | 2025-05-23 00:56:13.505060 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.505066 | orchestrator | Friday 23 May 2025 00:53:15 +0000 (0:00:01.850) 0:10:00.808 ************ 2025-05-23 00:56:13.505072 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.505078 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.505084 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.505090 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505096 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505102 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505108 | orchestrator | 2025-05-23 00:56:13.505114 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.505120 | orchestrator | Friday 23 May 2025 00:53:16 +0000 (0:00:01.599) 0:10:02.408 ************ 2025-05-23 00:56:13.505127 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:56:13.505133 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:56:13.505139 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:56:13.505147 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505154 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505160 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505166 | orchestrator | 2025-05-23 00:56:13.505172 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-23 00:56:13.505178 | orchestrator | Friday 23 May 2025 00:53:17 +0000 (0:00:00.951) 0:10:03.359 ************ 2025-05-23 00:56:13.505184 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.505190 | orchestrator | 2025-05-23 00:56:13.505196 | orchestrator | TASK [ceph-crash : get keys from monitors] 
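The only change so far in this play is 'create client.crash keyring' on testbed-node-0, which provisions the credentials the crash-reporting containers use. The capabilities documented for that client are 'profile crash' on both mon and mgr; below is a hedged sketch of an equivalent task, not ceph-ansible's exact implementation.

- name: create client.crash keyring (illustrative sketch)
  ansible.builtin.command: >
    ceph auth get-or-create client.crash
    mon 'profile crash' mgr 'profile crash'
    -o /etc/ceph/ceph.client.crash.keyring
  args:
    creates: /etc/ceph/ceph.client.crash.keyring
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true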
************************************* 2025-05-23 00:56:13.505202 | orchestrator | Friday 23 May 2025 00:53:21 +0000 (0:00:03.336) 0:10:06.695 ************ 2025-05-23 00:56:13.505208 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.505214 | orchestrator | 2025-05-23 00:56:13.505220 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-23 00:56:13.505226 | orchestrator | Friday 23 May 2025 00:53:22 +0000 (0:00:01.730) 0:10:08.427 ************ 2025-05-23 00:56:13.505233 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.505239 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.505245 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.505254 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.505260 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.505266 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.505272 | orchestrator | 2025-05-23 00:56:13.505278 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-23 00:56:13.505284 | orchestrator | Friday 23 May 2025 00:53:25 +0000 (0:00:02.104) 0:10:10.531 ************ 2025-05-23 00:56:13.505290 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.505296 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.505302 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.505308 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.505314 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.505320 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.505326 | orchestrator | 2025-05-23 00:56:13.505332 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-23 00:56:13.505339 | orchestrator | Friday 23 May 2025 00:53:26 +0000 (0:00:01.431) 0:10:11.962 ************ 2025-05-23 00:56:13.505345 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.505352 | orchestrator | 2025-05-23 00:56:13.505358 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-23 00:56:13.505368 | orchestrator | Friday 23 May 2025 00:53:27 +0000 (0:00:01.457) 0:10:13.420 ************ 2025-05-23 00:56:13.505374 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.505380 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.505402 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.505408 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.505414 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.505420 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.505426 | orchestrator | 2025-05-23 00:56:13.505433 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-23 00:56:13.505439 | orchestrator | Friday 23 May 2025 00:53:29 +0000 (0:00:01.606) 0:10:15.027 ************ 2025-05-23 00:56:13.505445 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.505451 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.505457 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.505463 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.505469 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.505475 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.505481 | 
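After distributing the key and creating /var/lib/ceph/crash/posted, the role renders a systemd unit per host and starts ceph-crash; in this containerized deployment the unit wraps a container. The template source, destination path and unit naming below are assumptions for illustration only.

- name: generate systemd unit file for ceph-crash container (illustrative sketch)
  ansible.builtin.template:
    src: ceph-crash.service.j2            # assumed template name
    dest: /etc/systemd/system/ceph-crash@.service
    owner: root
    group: root
    mode: "0644"

- name: start the ceph-crash service (illustrative sketch)
  ansible.builtin.systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    state: started
    enabled: true
    daemon_reload: true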
orchestrator | 2025-05-23 00:56:13.505487 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-23 00:56:13.505493 | orchestrator | Friday 23 May 2025 00:53:33 +0000 (0:00:04.152) 0:10:19.179 ************ 2025-05-23 00:56:13.505500 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.505506 | orchestrator | 2025-05-23 00:56:13.505512 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-23 00:56:13.505518 | orchestrator | Friday 23 May 2025 00:53:35 +0000 (0:00:01.352) 0:10:20.532 ************ 2025-05-23 00:56:13.505524 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.505530 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.505536 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.505542 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.505548 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.505555 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.505561 | orchestrator | 2025-05-23 00:56:13.505567 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-23 00:56:13.505573 | orchestrator | Friday 23 May 2025 00:53:35 +0000 (0:00:00.679) 0:10:21.212 ************ 2025-05-23 00:56:13.505579 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:56:13.505585 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.505591 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.505597 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:56:13.505603 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:56:13.505609 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.505615 | orchestrator | 2025-05-23 00:56:13.505621 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-23 00:56:13.505628 | orchestrator | Friday 23 May 2025 00:53:38 +0000 (0:00:02.331) 0:10:23.544 ************ 2025-05-23 00:56:13.505634 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:56:13.505640 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:56:13.505646 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:56:13.505652 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.505658 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.505664 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.505670 | orchestrator | 2025-05-23 00:56:13.505676 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-23 00:56:13.505682 | orchestrator | 2025-05-23 00:56:13.505689 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.505695 | orchestrator | Friday 23 May 2025 00:53:40 +0000 (0:00:02.094) 0:10:25.639 ************ 2025-05-23 00:56:13.505705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.505712 | orchestrator | 2025-05-23 00:56:13.505718 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-23 00:56:13.505729 | orchestrator | Friday 23 May 2025 00:53:40 +0000 (0:00:00.472) 0:10:26.111 ************ 2025-05-23 00:56:13.505735 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505741 | orchestrator | skipping: 
[testbed-node-4] 2025-05-23 00:56:13.505747 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505753 | orchestrator | 2025-05-23 00:56:13.505760 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.505766 | orchestrator | Friday 23 May 2025 00:53:41 +0000 (0:00:00.415) 0:10:26.526 ************ 2025-05-23 00:56:13.505772 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.505778 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.505784 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.505790 | orchestrator | 2025-05-23 00:56:13.505796 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.505803 | orchestrator | Friday 23 May 2025 00:53:41 +0000 (0:00:00.693) 0:10:27.220 ************ 2025-05-23 00:56:13.505814 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.505820 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.505827 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.505833 | orchestrator | 2025-05-23 00:56:13.505839 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-23 00:56:13.505845 | orchestrator | Friday 23 May 2025 00:53:42 +0000 (0:00:00.768) 0:10:27.989 ************ 2025-05-23 00:56:13.505851 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.505857 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.505863 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.505869 | orchestrator | 2025-05-23 00:56:13.505876 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.505882 | orchestrator | Friday 23 May 2025 00:53:43 +0000 (0:00:00.718) 0:10:28.707 ************ 2025-05-23 00:56:13.505888 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505894 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505900 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505906 | orchestrator | 2025-05-23 00:56:13.505912 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.505919 | orchestrator | Friday 23 May 2025 00:53:43 +0000 (0:00:00.691) 0:10:29.399 ************ 2025-05-23 00:56:13.505925 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505931 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505937 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505943 | orchestrator | 2025-05-23 00:56:13.505949 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.505956 | orchestrator | Friday 23 May 2025 00:53:44 +0000 (0:00:00.384) 0:10:29.783 ************ 2025-05-23 00:56:13.505962 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.505968 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.505974 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.505980 | orchestrator | 2025-05-23 00:56:13.505987 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.505993 | orchestrator | Friday 23 May 2025 00:53:44 +0000 (0:00:00.412) 0:10:30.196 ************ 2025-05-23 00:56:13.505999 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506005 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506012 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506046 
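The RUNNING HANDLER block a little earlier brackets the actual restart between 'set _crash_handler_called before restart' and '... after restart'; the flag keeps the restart from firing more than once per host even when several tasks notified the handler. A hedged sketch of that pattern, reusing the assumed unit name from the previous sketch:

- name: set _crash_handler_called before restart (illustrative sketch)
  ansible.builtin.set_fact:
    _crash_handler_called: true

- name: restart the ceph-crash service (illustrative sketch)
  ansible.builtin.systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    state: restarted
  when: _crash_handler_called | default(false) | bool

- name: set _crash_handler_called after restart (illustrative sketch)
  ansible.builtin.set_fact:
    _crash_handler_called: false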
| orchestrator | 2025-05-23 00:56:13.506057 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.506068 | orchestrator | Friday 23 May 2025 00:53:45 +0000 (0:00:00.368) 0:10:30.564 ************ 2025-05-23 00:56:13.506078 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506089 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506099 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506111 | orchestrator | 2025-05-23 00:56:13.506122 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.506140 | orchestrator | Friday 23 May 2025 00:53:45 +0000 (0:00:00.681) 0:10:31.246 ************ 2025-05-23 00:56:13.506148 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506154 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506160 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506166 | orchestrator | 2025-05-23 00:56:13.506172 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.506178 | orchestrator | Friday 23 May 2025 00:53:46 +0000 (0:00:00.409) 0:10:31.655 ************ 2025-05-23 00:56:13.506185 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.506191 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.506197 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.506203 | orchestrator | 2025-05-23 00:56:13.506209 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.506215 | orchestrator | Friday 23 May 2025 00:53:46 +0000 (0:00:00.828) 0:10:32.484 ************ 2025-05-23 00:56:13.506221 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506227 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506233 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506240 | orchestrator | 2025-05-23 00:56:13.506246 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.506252 | orchestrator | Friday 23 May 2025 00:53:47 +0000 (0:00:00.333) 0:10:32.818 ************ 2025-05-23 00:56:13.506258 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506264 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506270 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506276 | orchestrator | 2025-05-23 00:56:13.506282 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.506288 | orchestrator | Friday 23 May 2025 00:53:47 +0000 (0:00:00.625) 0:10:33.443 ************ 2025-05-23 00:56:13.506294 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.506300 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.506306 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.506312 | orchestrator | 2025-05-23 00:56:13.506319 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.506325 | orchestrator | Friday 23 May 2025 00:53:48 +0000 (0:00:00.402) 0:10:33.845 ************ 2025-05-23 00:56:13.506331 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.506337 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.506343 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.506354 | orchestrator | 2025-05-23 00:56:13.506361 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] 
****************************** 2025-05-23 00:56:13.506367 | orchestrator | Friday 23 May 2025 00:53:48 +0000 (0:00:00.389) 0:10:34.235 ************ 2025-05-23 00:56:13.506373 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.506379 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.506402 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.506410 | orchestrator | 2025-05-23 00:56:13.506416 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.506422 | orchestrator | Friday 23 May 2025 00:53:49 +0000 (0:00:00.343) 0:10:34.578 ************ 2025-05-23 00:56:13.506428 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506434 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506440 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506446 | orchestrator | 2025-05-23 00:56:13.506452 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.506458 | orchestrator | Friday 23 May 2025 00:53:49 +0000 (0:00:00.624) 0:10:35.202 ************ 2025-05-23 00:56:13.506465 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506475 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506481 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506487 | orchestrator | 2025-05-23 00:56:13.506493 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.506499 | orchestrator | Friday 23 May 2025 00:53:50 +0000 (0:00:00.348) 0:10:35.550 ************ 2025-05-23 00:56:13.506510 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506516 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506522 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506528 | orchestrator | 2025-05-23 00:56:13.506534 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.506540 | orchestrator | Friday 23 May 2025 00:53:50 +0000 (0:00:00.344) 0:10:35.895 ************ 2025-05-23 00:56:13.506546 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.506552 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.506558 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.506564 | orchestrator | 2025-05-23 00:56:13.506570 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.506576 | orchestrator | Friday 23 May 2025 00:53:50 +0000 (0:00:00.325) 0:10:36.221 ************ 2025-05-23 00:56:13.506583 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506589 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506595 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506601 | orchestrator | 2025-05-23 00:56:13.506607 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.506613 | orchestrator | Friday 23 May 2025 00:53:51 +0000 (0:00:00.612) 0:10:36.833 ************ 2025-05-23 00:56:13.506619 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506625 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506631 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506637 | orchestrator | 2025-05-23 00:56:13.506644 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.506650 | orchestrator | Friday 23 May 2025 
00:53:51 +0000 (0:00:00.333) 0:10:37.166 ************ 2025-05-23 00:56:13.506656 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506662 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506668 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506674 | orchestrator | 2025-05-23 00:56:13.506680 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.506686 | orchestrator | Friday 23 May 2025 00:53:52 +0000 (0:00:00.345) 0:10:37.512 ************ 2025-05-23 00:56:13.506692 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506698 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506704 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506710 | orchestrator | 2025-05-23 00:56:13.506716 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.506723 | orchestrator | Friday 23 May 2025 00:53:52 +0000 (0:00:00.410) 0:10:37.923 ************ 2025-05-23 00:56:13.506729 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506735 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506741 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506747 | orchestrator | 2025-05-23 00:56:13.506753 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.506759 | orchestrator | Friday 23 May 2025 00:53:53 +0000 (0:00:00.814) 0:10:38.738 ************ 2025-05-23 00:56:13.506765 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506771 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506777 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506783 | orchestrator | 2025-05-23 00:56:13.506789 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.506795 | orchestrator | Friday 23 May 2025 00:53:53 +0000 (0:00:00.382) 0:10:39.121 ************ 2025-05-23 00:56:13.506801 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506807 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506813 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506819 | orchestrator | 2025-05-23 00:56:13.506825 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.506832 | orchestrator | Friday 23 May 2025 00:53:54 +0000 (0:00:00.378) 0:10:39.499 ************ 2025-05-23 00:56:13.506838 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506848 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506854 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506860 | orchestrator | 2025-05-23 00:56:13.506866 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.506872 | orchestrator | Friday 23 May 2025 00:53:54 +0000 (0:00:00.322) 0:10:39.821 ************ 2025-05-23 00:56:13.506878 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506884 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506890 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506896 | orchestrator | 2025-05-23 00:56:13.506902 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.506909 | orchestrator | 
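The two num_osds variants above ('legacy report' and 'new report') exist because ceph-volume changed the JSON shape of its batch report across releases, and which variant applies is detected from the parsed output. Assuming, as the task names suggest, that the older shape nested the planned OSDs under an 'osds' key while the newer one is a flat list, a hedged sketch looks like this (lvm_batch_report as registered in the earlier sketch):

- name: set_fact num_osds (legacy report, illustrative sketch)
  ansible.builtin.set_fact:
    num_osds: "{{ (lvm_batch_report.stdout | from_json)['osds'] | length }}"
  when: (lvm_batch_report.stdout | from_json) is mapping

- name: set_fact num_osds (new report, illustrative sketch)
  ansible.builtin.set_fact:
    num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"
  when: (lvm_batch_report.stdout | from_json) is not mapping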
Friday 23 May 2025 00:53:55 +0000 (0:00:00.749) 0:10:40.571 ************ 2025-05-23 00:56:13.506918 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506924 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506931 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506937 | orchestrator | 2025-05-23 00:56:13.506943 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.506949 | orchestrator | Friday 23 May 2025 00:53:55 +0000 (0:00:00.381) 0:10:40.952 ************ 2025-05-23 00:56:13.506955 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506961 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.506967 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.506973 | orchestrator | 2025-05-23 00:56:13.506979 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.506986 | orchestrator | Friday 23 May 2025 00:53:55 +0000 (0:00:00.318) 0:10:41.271 ************ 2025-05-23 00:56:13.506992 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.506998 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507004 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507010 | orchestrator | 2025-05-23 00:56:13.507016 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.507025 | orchestrator | Friday 23 May 2025 00:53:56 +0000 (0:00:00.404) 0:10:41.675 ************ 2025-05-23 00:56:13.507031 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.507037 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.507043 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507050 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.507056 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.507062 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507068 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.507074 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.507080 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507086 | orchestrator | 2025-05-23 00:56:13.507092 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.507098 | orchestrator | Friday 23 May 2025 00:53:56 +0000 (0:00:00.791) 0:10:42.467 ************ 2025-05-23 00:56:13.507104 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-23 00:56:13.507110 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-23 00:56:13.507116 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507122 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-23 00:56:13.507128 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-23 00:56:13.507134 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507141 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-23 00:56:13.507147 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-23 00:56:13.507153 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507159 | orchestrator | 2025-05-23 00:56:13.507165 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] 
******************************* 2025-05-23 00:56:13.507175 | orchestrator | Friday 23 May 2025 00:53:57 +0000 (0:00:00.382) 0:10:42.850 ************ 2025-05-23 00:56:13.507181 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507187 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507193 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507199 | orchestrator | 2025-05-23 00:56:13.507205 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.507211 | orchestrator | Friday 23 May 2025 00:53:57 +0000 (0:00:00.353) 0:10:43.203 ************ 2025-05-23 00:56:13.507217 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507224 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507230 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507236 | orchestrator | 2025-05-23 00:56:13.507242 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.507248 | orchestrator | Friday 23 May 2025 00:53:58 +0000 (0:00:00.355) 0:10:43.558 ************ 2025-05-23 00:56:13.507254 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507260 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507266 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507272 | orchestrator | 2025-05-23 00:56:13.507278 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.507284 | orchestrator | Friday 23 May 2025 00:53:58 +0000 (0:00:00.751) 0:10:44.310 ************ 2025-05-23 00:56:13.507290 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507296 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507302 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507308 | orchestrator | 2025-05-23 00:56:13.507314 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.507321 | orchestrator | Friday 23 May 2025 00:53:59 +0000 (0:00:00.363) 0:10:44.674 ************ 2025-05-23 00:56:13.507327 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507333 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507339 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507345 | orchestrator | 2025-05-23 00:56:13.507351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.507357 | orchestrator | Friday 23 May 2025 00:53:59 +0000 (0:00:00.351) 0:10:45.025 ************ 2025-05-23 00:56:13.507363 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507369 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507375 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507381 | orchestrator | 2025-05-23 00:56:13.507421 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.507428 | orchestrator | Friday 23 May 2025 00:53:59 +0000 (0:00:00.329) 0:10:45.355 ************ 2025-05-23 00:56:13.507434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.507440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.507446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.507453 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 00:56:13.507459 | orchestrator | 2025-05-23 00:56:13.507468 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.507475 | orchestrator | Friday 23 May 2025 00:54:00 +0000 (0:00:00.873) 0:10:46.228 ************ 2025-05-23 00:56:13.507481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.507487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.507493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.507499 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507505 | orchestrator | 2025-05-23 00:56:13.507511 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.507517 | orchestrator | Friday 23 May 2025 00:54:01 +0000 (0:00:00.418) 0:10:46.647 ************ 2025-05-23 00:56:13.507528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.507534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.507540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.507546 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507552 | orchestrator | 2025-05-23 00:56:13.507562 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.507568 | orchestrator | Friday 23 May 2025 00:54:01 +0000 (0:00:00.469) 0:10:47.117 ************ 2025-05-23 00:56:13.507575 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507581 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507587 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507593 | orchestrator | 2025-05-23 00:56:13.507599 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.507605 | orchestrator | Friday 23 May 2025 00:54:01 +0000 (0:00:00.343) 0:10:47.460 ************ 2025-05-23 00:56:13.507611 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.507617 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507624 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.507630 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507636 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.507642 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507648 | orchestrator | 2025-05-23 00:56:13.507654 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.507660 | orchestrator | Friday 23 May 2025 00:54:02 +0000 (0:00:00.466) 0:10:47.927 ************ 2025-05-23 00:56:13.507666 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507672 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507678 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507684 | orchestrator | 2025-05-23 00:56:13.507690 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.507697 | orchestrator | Friday 23 May 2025 00:54:03 +0000 (0:00:00.671) 0:10:48.598 ************ 2025-05-23 00:56:13.507703 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507709 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507714 | orchestrator | skipping: 
[testbed-node-5] 2025-05-23 00:56:13.507720 | orchestrator | 2025-05-23 00:56:13.507726 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.507733 | orchestrator | Friday 23 May 2025 00:54:03 +0000 (0:00:00.345) 0:10:48.944 ************ 2025-05-23 00:56:13.507739 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.507745 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507751 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.507757 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507763 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.507769 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507775 | orchestrator | 2025-05-23 00:56:13.507781 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.507787 | orchestrator | Friday 23 May 2025 00:54:03 +0000 (0:00:00.488) 0:10:49.433 ************ 2025-05-23 00:56:13.507793 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.507799 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507806 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.507812 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507818 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.507824 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507830 | orchestrator | 2025-05-23 00:56:13.507836 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.507846 | orchestrator | Friday 23 May 2025 00:54:04 +0000 (0:00:00.354) 0:10:49.788 ************ 2025-05-23 00:56:13.507852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.507858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.507864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.507870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.507876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.507882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.507888 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507894 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507900 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.507906 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.507912 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.507918 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507924 | orchestrator | 2025-05-23 00:56:13.507931 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.507940 | orchestrator | Friday 23 May 2025 00:54:05 +0000 (0:00:01.055) 0:10:50.844 ************ 2025-05-23 00:56:13.507945 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507951 | 
orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507956 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.507961 | orchestrator | 2025-05-23 00:56:13.507967 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.507972 | orchestrator | Friday 23 May 2025 00:54:05 +0000 (0:00:00.595) 0:10:51.439 ************ 2025-05-23 00:56:13.507978 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.507983 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.507988 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.507994 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.507999 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.508004 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508010 | orchestrator | 2025-05-23 00:56:13.508015 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.508024 | orchestrator | Friday 23 May 2025 00:54:06 +0000 (0:00:00.906) 0:10:52.345 ************ 2025-05-23 00:56:13.508029 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508034 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.508040 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508045 | orchestrator | 2025-05-23 00:56:13.508050 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.508056 | orchestrator | Friday 23 May 2025 00:54:07 +0000 (0:00:00.545) 0:10:52.890 ************ 2025-05-23 00:56:13.508061 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508066 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.508072 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508077 | orchestrator | 2025-05-23 00:56:13.508082 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-23 00:56:13.508088 | orchestrator | Friday 23 May 2025 00:54:08 +0000 (0:00:00.855) 0:10:53.746 ************ 2025-05-23 00:56:13.508093 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.508098 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508104 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-23 00:56:13.508109 | orchestrator | 2025-05-23 00:56:13.508114 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-23 00:56:13.508120 | orchestrator | Friday 23 May 2025 00:54:08 +0000 (0:00:00.457) 0:10:54.203 ************ 2025-05-23 00:56:13.508125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.508134 | orchestrator | 2025-05-23 00:56:13.508139 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-23 00:56:13.508145 | orchestrator | Friday 23 May 2025 00:54:10 +0000 (0:00:01.743) 0:10:55.946 ************ 2025-05-23 00:56:13.508151 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-23 00:56:13.508158 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508163 | orchestrator | 2025-05-23 00:56:13.508169 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-05-23 00:56:13.508174 | orchestrator | Friday 23 May 2025 00:54:11 +0000 (0:00:00.679) 0:10:56.626 ************ 2025-05-23 00:56:13.508181 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:56:13.508188 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:56:13.508194 | orchestrator | 2025-05-23 00:56:13.508199 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-23 00:56:13.508205 | orchestrator | Friday 23 May 2025 00:54:18 +0000 (0:00:07.506) 0:11:04.132 ************ 2025-05-23 00:56:13.508210 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-23 00:56:13.508216 | orchestrator | 2025-05-23 00:56:13.508221 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-23 00:56:13.508226 | orchestrator | Friday 23 May 2025 00:54:21 +0000 (0:00:02.823) 0:11:06.956 ************ 2025-05-23 00:56:13.508232 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.508237 | orchestrator | 2025-05-23 00:56:13.508242 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-23 00:56:13.508248 | orchestrator | Friday 23 May 2025 00:54:22 +0000 (0:00:00.600) 0:11:07.556 ************ 2025-05-23 00:56:13.508253 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-23 00:56:13.508258 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-23 00:56:13.508264 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-23 00:56:13.508269 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-23 00:56:13.508274 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-23 00:56:13.508280 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-23 00:56:13.508285 | orchestrator | 2025-05-23 00:56:13.508293 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-23 00:56:13.508298 | orchestrator | Friday 23 May 2025 00:54:23 +0000 (0:00:01.530) 0:11:09.087 ************ 2025-05-23 00:56:13.508304 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:56:13.508309 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.508315 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-23 00:56:13.508320 | orchestrator | 2025-05-23 00:56:13.508325 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-23 00:56:13.508331 | orchestrator | Friday 23 May 2025 00:54:25 +0000 (0:00:01.792) 0:11:10.879 ************ 2025-05-23 00:56:13.508336 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 
00:56:13.508341 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.508350 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508355 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 00:56:13.508361 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.508367 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508375 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 00:56:13.508380 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.508402 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508408 | orchestrator | 2025-05-23 00:56:13.508413 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-23 00:56:13.508418 | orchestrator | Friday 23 May 2025 00:54:26 +0000 (0:00:01.210) 0:11:12.090 ************ 2025-05-23 00:56:13.508423 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508429 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.508434 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508439 | orchestrator | 2025-05-23 00:56:13.508445 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-23 00:56:13.508450 | orchestrator | Friday 23 May 2025 00:54:26 +0000 (0:00:00.334) 0:11:12.425 ************ 2025-05-23 00:56:13.508455 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.508461 | orchestrator | 2025-05-23 00:56:13.508466 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-23 00:56:13.508471 | orchestrator | Friday 23 May 2025 00:54:27 +0000 (0:00:00.843) 0:11:13.268 ************ 2025-05-23 00:56:13.508477 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.508482 | orchestrator | 2025-05-23 00:56:13.508487 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-23 00:56:13.508493 | orchestrator | Friday 23 May 2025 00:54:28 +0000 (0:00:00.546) 0:11:13.815 ************ 2025-05-23 00:56:13.508498 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508503 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508509 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508514 | orchestrator | 2025-05-23 00:56:13.508519 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-23 00:56:13.508525 | orchestrator | Friday 23 May 2025 00:54:29 +0000 (0:00:01.486) 0:11:15.301 ************ 2025-05-23 00:56:13.508530 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508535 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508541 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508546 | orchestrator | 2025-05-23 00:56:13.508551 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-23 00:56:13.508556 | orchestrator | Friday 23 May 2025 00:54:30 +0000 (0:00:01.174) 0:11:16.476 ************ 2025-05-23 00:56:13.508562 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508567 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508572 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508578 | orchestrator | 2025-05-23 
00:56:13.508583 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-23 00:56:13.508588 | orchestrator | Friday 23 May 2025 00:54:32 +0000 (0:00:01.665) 0:11:18.141 ************ 2025-05-23 00:56:13.508594 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508599 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508604 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508609 | orchestrator | 2025-05-23 00:56:13.508615 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-23 00:56:13.508620 | orchestrator | Friday 23 May 2025 00:54:34 +0000 (0:00:02.224) 0:11:20.366 ************ 2025-05-23 00:56:13.508625 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-23 00:56:13.508631 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-23 00:56:13.508636 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-23 00:56:13.508645 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.508651 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.508656 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.508661 | orchestrator | 2025-05-23 00:56:13.508667 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.508672 | orchestrator | Friday 23 May 2025 00:54:52 +0000 (0:00:17.138) 0:11:37.505 ************ 2025-05-23 00:56:13.508677 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508683 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508688 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508694 | orchestrator | 2025-05-23 00:56:13.508699 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-23 00:56:13.508704 | orchestrator | Friday 23 May 2025 00:54:52 +0000 (0:00:00.687) 0:11:38.192 ************ 2025-05-23 00:56:13.508710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.508715 | orchestrator | 2025-05-23 00:56:13.508720 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-23 00:56:13.508725 | orchestrator | Friday 23 May 2025 00:54:53 +0000 (0:00:00.883) 0:11:39.075 ************ 2025-05-23 00:56:13.508734 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.508739 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.508745 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.508750 | orchestrator | 2025-05-23 00:56:13.508756 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-23 00:56:13.508761 | orchestrator | Friday 23 May 2025 00:54:53 +0000 (0:00:00.368) 0:11:39.443 ************ 2025-05-23 00:56:13.508766 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508772 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508777 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508783 | orchestrator | 2025-05-23 00:56:13.508788 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-23 00:56:13.508793 | orchestrator | Friday 23 May 2025 00:54:55 +0000 (0:00:01.303) 0:11:40.747 ************ 2025-05-23 00:56:13.508798 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.508804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.508809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.508814 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508820 | orchestrator | 2025-05-23 00:56:13.508830 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-23 00:56:13.508835 | orchestrator | Friday 23 May 2025 00:54:56 +0000 (0:00:00.921) 0:11:41.668 ************ 2025-05-23 00:56:13.508841 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.508846 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.508851 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.508857 | orchestrator | 2025-05-23 00:56:13.508862 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.508867 | orchestrator | Friday 23 May 2025 00:54:56 +0000 (0:00:00.661) 0:11:42.330 ************ 2025-05-23 00:56:13.508873 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.508878 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.508884 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.508890 | orchestrator | 2025-05-23 00:56:13.508895 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-23 00:56:13.508900 | orchestrator | 2025-05-23 00:56:13.508906 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-23 00:56:13.508911 | orchestrator | Friday 23 May 2025 00:54:58 +0000 (0:00:01.984) 0:11:44.314 ************ 2025-05-23 00:56:13.508917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.508922 | orchestrator | 2025-05-23 00:56:13.508927 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-23 00:56:13.508937 | orchestrator | Friday 23 May 2025 00:54:59 +0000 (0:00:00.713) 0:11:45.028 ************ 2025-05-23 00:56:13.508942 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.508947 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.508953 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.508958 | orchestrator | 2025-05-23 00:56:13.508963 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-23 00:56:13.508969 | orchestrator | Friday 23 May 2025 00:54:59 +0000 (0:00:00.311) 0:11:45.340 ************ 2025-05-23 00:56:13.508974 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.508980 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.508985 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.508991 | orchestrator | 2025-05-23 00:56:13.508996 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-23 00:56:13.509001 | orchestrator | Friday 23 May 2025 00:55:00 +0000 (0:00:00.730) 0:11:46.070 ************ 2025-05-23 00:56:13.509007 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509012 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509017 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509023 | orchestrator | 2025-05-23 00:56:13.509028 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-23 00:56:13.509034 | orchestrator | Friday 23 May 2025 00:55:01 +0000 (0:00:00.711) 0:11:46.782 ************ 2025-05-23 00:56:13.509039 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509044 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509049 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509055 | orchestrator | 2025-05-23 00:56:13.509060 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-23 00:56:13.509065 | orchestrator | Friday 23 May 2025 00:55:02 +0000 (0:00:01.113) 0:11:47.895 ************ 2025-05-23 00:56:13.509070 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509076 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509081 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509086 | orchestrator | 2025-05-23 00:56:13.509092 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-23 00:56:13.509097 | orchestrator | Friday 23 May 2025 00:55:02 +0000 (0:00:00.350) 0:11:48.246 ************ 2025-05-23 00:56:13.509102 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509108 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509113 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509118 | orchestrator | 2025-05-23 00:56:13.509124 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-23 00:56:13.509129 | orchestrator | Friday 23 May 2025 00:55:03 +0000 (0:00:00.320) 0:11:48.566 ************ 2025-05-23 00:56:13.509134 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509139 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509145 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509150 | orchestrator | 2025-05-23 00:56:13.509155 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-23 00:56:13.509160 | orchestrator | Friday 23 May 2025 00:55:03 +0000 (0:00:00.584) 0:11:49.151 ************ 2025-05-23 00:56:13.509166 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509171 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509176 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509182 | orchestrator | 2025-05-23 00:56:13.509187 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-23 00:56:13.509192 | orchestrator | Friday 23 May 2025 00:55:03 +0000 (0:00:00.317) 0:11:49.469 ************ 2025-05-23 00:56:13.509197 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509203 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509211 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509216 | orchestrator | 2025-05-23 00:56:13.509221 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-23 00:56:13.509227 | orchestrator | Friday 23 May 2025 00:55:04 +0000 (0:00:00.319) 0:11:49.788 ************ 2025-05-23 00:56:13.509234 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509240 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509245 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509250 | orchestrator | 2025-05-23 00:56:13.509256 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-23 00:56:13.509261 | orchestrator | Friday 23 May 2025 00:55:04 +0000 
(0:00:00.322) 0:11:50.111 ************ 2025-05-23 00:56:13.509266 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509271 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509277 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509282 | orchestrator | 2025-05-23 00:56:13.509287 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-23 00:56:13.509293 | orchestrator | Friday 23 May 2025 00:55:05 +0000 (0:00:01.093) 0:11:51.204 ************ 2025-05-23 00:56:13.509298 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509306 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509311 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509317 | orchestrator | 2025-05-23 00:56:13.509322 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-23 00:56:13.509327 | orchestrator | Friday 23 May 2025 00:55:06 +0000 (0:00:00.349) 0:11:51.554 ************ 2025-05-23 00:56:13.509332 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509338 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509343 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509348 | orchestrator | 2025-05-23 00:56:13.509353 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-23 00:56:13.509359 | orchestrator | Friday 23 May 2025 00:55:06 +0000 (0:00:00.343) 0:11:51.897 ************ 2025-05-23 00:56:13.509364 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509369 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509375 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509380 | orchestrator | 2025-05-23 00:56:13.509401 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-23 00:56:13.509407 | orchestrator | Friday 23 May 2025 00:55:06 +0000 (0:00:00.374) 0:11:52.272 ************ 2025-05-23 00:56:13.509412 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509418 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509423 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509428 | orchestrator | 2025-05-23 00:56:13.509434 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-23 00:56:13.509439 | orchestrator | Friday 23 May 2025 00:55:07 +0000 (0:00:00.665) 0:11:52.937 ************ 2025-05-23 00:56:13.509444 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509449 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509454 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509460 | orchestrator | 2025-05-23 00:56:13.509465 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-23 00:56:13.509470 | orchestrator | Friday 23 May 2025 00:55:07 +0000 (0:00:00.300) 0:11:53.238 ************ 2025-05-23 00:56:13.509475 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509481 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509486 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509491 | orchestrator | 2025-05-23 00:56:13.509497 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-23 00:56:13.509502 | orchestrator | Friday 23 May 2025 00:55:08 +0000 (0:00:00.301) 0:11:53.540 ************ 2025-05-23 00:56:13.509507 | orchestrator | skipping: [testbed-node-3] 2025-05-23 
00:56:13.509512 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509518 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509523 | orchestrator | 2025-05-23 00:56:13.509528 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-23 00:56:13.509534 | orchestrator | Friday 23 May 2025 00:55:08 +0000 (0:00:00.291) 0:11:53.831 ************ 2025-05-23 00:56:13.509543 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509548 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509553 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509559 | orchestrator | 2025-05-23 00:56:13.509564 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-23 00:56:13.509569 | orchestrator | Friday 23 May 2025 00:55:08 +0000 (0:00:00.483) 0:11:54.315 ************ 2025-05-23 00:56:13.509575 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.509580 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.509585 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.509590 | orchestrator | 2025-05-23 00:56:13.509596 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-23 00:56:13.509601 | orchestrator | Friday 23 May 2025 00:55:09 +0000 (0:00:00.309) 0:11:54.625 ************ 2025-05-23 00:56:13.509606 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509611 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509617 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509622 | orchestrator | 2025-05-23 00:56:13.509627 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-23 00:56:13.509632 | orchestrator | Friday 23 May 2025 00:55:09 +0000 (0:00:00.325) 0:11:54.951 ************ 2025-05-23 00:56:13.509638 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509643 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509648 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509654 | orchestrator | 2025-05-23 00:56:13.509659 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-23 00:56:13.509665 | orchestrator | Friday 23 May 2025 00:55:09 +0000 (0:00:00.308) 0:11:55.259 ************ 2025-05-23 00:56:13.509670 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509675 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509680 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509685 | orchestrator | 2025-05-23 00:56:13.509691 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-23 00:56:13.509696 | orchestrator | Friday 23 May 2025 00:55:10 +0000 (0:00:00.512) 0:11:55.771 ************ 2025-05-23 00:56:13.509701 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509706 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509712 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509717 | orchestrator | 2025-05-23 00:56:13.509725 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-23 00:56:13.509731 | orchestrator | Friday 23 May 2025 00:55:10 +0000 (0:00:00.309) 0:11:56.081 ************ 2025-05-23 00:56:13.509737 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509742 | orchestrator | skipping: [testbed-node-4] 2025-05-23 
00:56:13.509747 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509753 | orchestrator | 2025-05-23 00:56:13.509758 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-23 00:56:13.509763 | orchestrator | Friday 23 May 2025 00:55:10 +0000 (0:00:00.288) 0:11:56.370 ************ 2025-05-23 00:56:13.509768 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509774 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509779 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509784 | orchestrator | 2025-05-23 00:56:13.509789 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-23 00:56:13.509795 | orchestrator | Friday 23 May 2025 00:55:11 +0000 (0:00:00.275) 0:11:56.645 ************ 2025-05-23 00:56:13.509800 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509808 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509814 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509819 | orchestrator | 2025-05-23 00:56:13.509824 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-23 00:56:13.509830 | orchestrator | Friday 23 May 2025 00:55:11 +0000 (0:00:00.485) 0:11:57.130 ************ 2025-05-23 00:56:13.509835 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509844 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509849 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509854 | orchestrator | 2025-05-23 00:56:13.509860 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-23 00:56:13.509865 | orchestrator | Friday 23 May 2025 00:55:11 +0000 (0:00:00.288) 0:11:57.418 ************ 2025-05-23 00:56:13.509871 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509876 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509881 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509886 | orchestrator | 2025-05-23 00:56:13.509892 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-23 00:56:13.509897 | orchestrator | Friday 23 May 2025 00:55:12 +0000 (0:00:00.296) 0:11:57.715 ************ 2025-05-23 00:56:13.509902 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509908 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509913 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509918 | orchestrator | 2025-05-23 00:56:13.509924 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-23 00:56:13.509929 | orchestrator | Friday 23 May 2025 00:55:12 +0000 (0:00:00.303) 0:11:58.018 ************ 2025-05-23 00:56:13.509934 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509940 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.509945 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509950 | orchestrator | 2025-05-23 00:56:13.509956 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-23 00:56:13.509961 | orchestrator | Friday 23 May 2025 00:55:13 +0000 (0:00:00.474) 0:11:58.492 ************ 2025-05-23 00:56:13.509967 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.509972 | orchestrator | 
skipping: [testbed-node-4] 2025-05-23 00:56:13.509977 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.509982 | orchestrator | 2025-05-23 00:56:13.509988 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-23 00:56:13.509993 | orchestrator | Friday 23 May 2025 00:55:13 +0000 (0:00:00.307) 0:11:58.799 ************ 2025-05-23 00:56:13.509998 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.510004 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-23 00:56:13.510009 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510031 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.510038 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-23 00:56:13.510043 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510048 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.510054 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-23 00:56:13.510059 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510064 | orchestrator | 2025-05-23 00:56:13.510069 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-23 00:56:13.510075 | orchestrator | Friday 23 May 2025 00:55:13 +0000 (0:00:00.371) 0:11:59.171 ************ 2025-05-23 00:56:13.510080 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-23 00:56:13.510085 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-23 00:56:13.510090 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510096 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-23 00:56:13.510101 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-23 00:56:13.510106 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510112 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-23 00:56:13.510117 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-23 00:56:13.510122 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510127 | orchestrator | 2025-05-23 00:56:13.510133 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-23 00:56:13.510141 | orchestrator | Friday 23 May 2025 00:55:14 +0000 (0:00:00.320) 0:11:59.491 ************ 2025-05-23 00:56:13.510147 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510152 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510157 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510163 | orchestrator | 2025-05-23 00:56:13.510168 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-23 00:56:13.510173 | orchestrator | Friday 23 May 2025 00:55:14 +0000 (0:00:00.548) 0:12:00.040 ************ 2025-05-23 00:56:13.510178 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510184 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510192 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510197 | orchestrator | 2025-05-23 00:56:13.510203 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:56:13.510208 | orchestrator | Friday 23 May 2025 00:55:14 +0000 (0:00:00.326) 0:12:00.367 ************ 2025-05-23 
00:56:13.510214 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510219 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510224 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510230 | orchestrator | 2025-05-23 00:56:13.510235 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:56:13.510240 | orchestrator | Friday 23 May 2025 00:55:15 +0000 (0:00:00.335) 0:12:00.702 ************ 2025-05-23 00:56:13.510246 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510251 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510256 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510261 | orchestrator | 2025-05-23 00:56:13.510267 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:56:13.510275 | orchestrator | Friday 23 May 2025 00:55:15 +0000 (0:00:00.347) 0:12:01.050 ************ 2025-05-23 00:56:13.510280 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510286 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510291 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510296 | orchestrator | 2025-05-23 00:56:13.510302 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:56:13.510307 | orchestrator | Friday 23 May 2025 00:55:16 +0000 (0:00:00.555) 0:12:01.605 ************ 2025-05-23 00:56:13.510313 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510318 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510324 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510332 | orchestrator | 2025-05-23 00:56:13.510342 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:56:13.510351 | orchestrator | Friday 23 May 2025 00:55:16 +0000 (0:00:00.357) 0:12:01.963 ************ 2025-05-23 00:56:13.510359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.510368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.510377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.510400 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510409 | orchestrator | 2025-05-23 00:56:13.510418 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:56:13.510426 | orchestrator | Friday 23 May 2025 00:55:16 +0000 (0:00:00.407) 0:12:02.370 ************ 2025-05-23 00:56:13.510434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.510443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.510453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.510461 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510470 | orchestrator | 2025-05-23 00:56:13.510479 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:56:13.510488 | orchestrator | Friday 23 May 2025 00:55:17 +0000 (0:00:00.441) 0:12:02.811 ************ 2025-05-23 00:56:13.510503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.510510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.510519 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-23 00:56:13.510528 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510536 | orchestrator | 2025-05-23 00:56:13.510546 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.510556 | orchestrator | Friday 23 May 2025 00:55:17 +0000 (0:00:00.458) 0:12:03.270 ************ 2025-05-23 00:56:13.510565 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510573 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510582 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510590 | orchestrator | 2025-05-23 00:56:13.510598 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:56:13.510607 | orchestrator | Friday 23 May 2025 00:55:18 +0000 (0:00:00.339) 0:12:03.609 ************ 2025-05-23 00:56:13.510616 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.510624 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510633 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.510642 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510650 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.510659 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510667 | orchestrator | 2025-05-23 00:56:13.510677 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:56:13.510686 | orchestrator | Friday 23 May 2025 00:55:18 +0000 (0:00:00.762) 0:12:04.371 ************ 2025-05-23 00:56:13.510695 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510704 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510712 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510721 | orchestrator | 2025-05-23 00:56:13.510730 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:56:13.510739 | orchestrator | Friday 23 May 2025 00:55:19 +0000 (0:00:00.339) 0:12:04.711 ************ 2025-05-23 00:56:13.510749 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510758 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510767 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510776 | orchestrator | 2025-05-23 00:56:13.510786 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:56:13.510795 | orchestrator | Friday 23 May 2025 00:55:19 +0000 (0:00:00.356) 0:12:05.067 ************ 2025-05-23 00:56:13.510804 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:56:13.510810 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510815 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:56:13.510820 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510826 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:56:13.510831 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510836 | orchestrator | 2025-05-23 00:56:13.510848 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:56:13.510854 | orchestrator | Friday 23 May 2025 00:55:20 +0000 (0:00:00.483) 0:12:05.551 ************ 2025-05-23 00:56:13.510860 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
2025-05-23 00:56:13.510865 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510871 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.510876 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510882 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:56:13.510887 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510892 | orchestrator | 2025-05-23 00:56:13.510898 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:56:13.510913 | orchestrator | Friday 23 May 2025 00:55:20 +0000 (0:00:00.652) 0:12:06.204 ************ 2025-05-23 00:56:13.510919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.510924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.510930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.510935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:56:13.510940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:56:13.510946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:56:13.510951 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.510956 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.510962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:56:13.510967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:56:13.510972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:56:13.510977 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.510983 | orchestrator | 2025-05-23 00:56:13.510988 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-23 00:56:13.510994 | orchestrator | Friday 23 May 2025 00:55:21 +0000 (0:00:00.613) 0:12:06.817 ************ 2025-05-23 00:56:13.510999 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511004 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511009 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511015 | orchestrator | 2025-05-23 00:56:13.511020 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-23 00:56:13.511025 | orchestrator | Friday 23 May 2025 00:55:22 +0000 (0:00:00.780) 0:12:07.598 ************ 2025-05-23 00:56:13.511030 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.511036 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511041 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.511046 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511051 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.511057 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511062 | orchestrator | 2025-05-23 00:56:13.511067 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-23 00:56:13.511072 | orchestrator | Friday 23 May 2025 00:55:22 +0000 (0:00:00.590) 0:12:08.189 ************ 2025-05-23 00:56:13.511078 | orchestrator | skipping: [testbed-node-3] 
2025-05-23 00:56:13.511083 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511088 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511093 | orchestrator | 2025-05-23 00:56:13.511099 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-23 00:56:13.511104 | orchestrator | Friday 23 May 2025 00:55:23 +0000 (0:00:00.823) 0:12:09.012 ************ 2025-05-23 00:56:13.511109 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511114 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511120 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511125 | orchestrator | 2025-05-23 00:56:13.511130 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-23 00:56:13.511135 | orchestrator | Friday 23 May 2025 00:55:24 +0000 (0:00:00.573) 0:12:09.586 ************ 2025-05-23 00:56:13.511141 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.511146 | orchestrator | 2025-05-23 00:56:13.511151 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-23 00:56:13.511157 | orchestrator | Friday 23 May 2025 00:55:25 +0000 (0:00:00.955) 0:12:10.542 ************ 2025-05-23 00:56:13.511162 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-23 00:56:13.511167 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-23 00:56:13.511177 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-23 00:56:13.511182 | orchestrator | 2025-05-23 00:56:13.511187 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-23 00:56:13.511193 | orchestrator | Friday 23 May 2025 00:55:25 +0000 (0:00:00.716) 0:12:11.258 ************ 2025-05-23 00:56:13.511198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:56:13.511203 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.511208 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-23 00:56:13.511214 | orchestrator | 2025-05-23 00:56:13.511219 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-23 00:56:13.511224 | orchestrator | Friday 23 May 2025 00:55:27 +0000 (0:00:01.906) 0:12:13.164 ************ 2025-05-23 00:56:13.511229 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 00:56:13.511235 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-23 00:56:13.511240 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511248 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 00:56:13.511253 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-23 00:56:13.511258 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511264 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 00:56:13.511269 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-23 00:56:13.511274 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511279 | orchestrator | 2025-05-23 00:56:13.511285 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-23 00:56:13.511290 | orchestrator | Friday 23 May 2025 00:55:28 +0000 (0:00:01.254) 0:12:14.419 ************ 2025-05-23 00:56:13.511295 | 
orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511301 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511306 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511311 | orchestrator | 2025-05-23 00:56:13.511316 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-23 00:56:13.511321 | orchestrator | Friday 23 May 2025 00:55:29 +0000 (0:00:00.574) 0:12:14.993 ************ 2025-05-23 00:56:13.511327 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511335 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511340 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511345 | orchestrator | 2025-05-23 00:56:13.511351 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-23 00:56:13.511356 | orchestrator | Friday 23 May 2025 00:55:29 +0000 (0:00:00.367) 0:12:15.361 ************ 2025-05-23 00:56:13.511361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-23 00:56:13.511366 | orchestrator | 2025-05-23 00:56:13.511372 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-23 00:56:13.511377 | orchestrator | Friday 23 May 2025 00:55:30 +0000 (0:00:00.238) 0:12:15.600 ************ 2025-05-23 00:56:13.511418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511448 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511454 | orchestrator | 2025-05-23 00:56:13.511459 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-23 00:56:13.511468 | orchestrator | Friday 23 May 2025 00:55:30 +0000 (0:00:00.661) 0:12:16.262 ************ 2025-05-23 00:56:13.511474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511500 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511506 | 
orchestrator | 2025-05-23 00:56:13.511510 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-23 00:56:13.511515 | orchestrator | Friday 23 May 2025 00:55:31 +0000 (0:00:00.930) 0:12:17.193 ************ 2025-05-23 00:56:13.511520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-23 00:56:13.511543 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511548 | orchestrator | 2025-05-23 00:56:13.511553 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-23 00:56:13.511557 | orchestrator | Friday 23 May 2025 00:55:32 +0000 (0:00:00.638) 0:12:17.831 ************ 2025-05-23 00:56:13.511562 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-23 00:56:13.511571 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-23 00:56:13.511576 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-23 00:56:13.511581 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-23 00:56:13.511585 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-23 00:56:13.511590 | orchestrator | 2025-05-23 00:56:13.511595 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-23 00:56:13.511605 | orchestrator | Friday 23 May 2025 00:55:56 +0000 (0:00:23.880) 0:12:41.711 ************ 2025-05-23 00:56:13.511610 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511614 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511619 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511624 | orchestrator | 2025-05-23 00:56:13.511629 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-23 00:56:13.511633 | orchestrator | Friday 23 May 2025 00:55:56 +0000 (0:00:00.539) 0:12:42.251 ************ 2025-05-23 00:56:13.511641 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511646 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511651 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511656 | orchestrator | 2025-05-23 00:56:13.511660 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-23 00:56:13.511665 | orchestrator | Friday 23 May 2025 00:55:57 +0000 (0:00:00.350) 0:12:42.602 ************ 2025-05-23 00:56:13.511670 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.511675 | orchestrator | 2025-05-23 00:56:13.511679 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-23 00:56:13.511684 | orchestrator | Friday 23 May 2025 00:55:57 +0000 (0:00:00.561) 0:12:43.163 ************ 2025-05-23 00:56:13.511689 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.511693 | orchestrator | 2025-05-23 00:56:13.511698 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-23 00:56:13.511703 | orchestrator | Friday 23 May 2025 00:55:58 +0000 (0:00:00.892) 0:12:44.056 ************ 2025-05-23 00:56:13.511707 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511712 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511717 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511721 | orchestrator | 2025-05-23 00:56:13.511726 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-23 00:56:13.511731 | orchestrator | Friday 23 May 2025 00:55:59 +0000 (0:00:01.244) 0:12:45.300 ************ 2025-05-23 00:56:13.511735 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511740 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511745 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511749 | orchestrator | 2025-05-23 00:56:13.511754 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-23 00:56:13.511759 | orchestrator | Friday 23 May 2025 00:56:00 +0000 (0:00:01.159) 0:12:46.459 ************ 2025-05-23 00:56:13.511763 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511768 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511773 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511777 | orchestrator | 2025-05-23 00:56:13.511782 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-23 00:56:13.511787 | orchestrator | Friday 23 May 2025 00:56:02 +0000 (0:00:01.980) 0:12:48.440 ************ 2025-05-23 00:56:13.511791 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.511796 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.511801 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-23 00:56:13.511806 | orchestrator | 2025-05-23 00:56:13.511810 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-23 00:56:13.511815 | orchestrator | Friday 23 May 2025 00:56:04 +0000 (0:00:01.954) 0:12:50.395 ************ 2025-05-23 00:56:13.511820 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511824 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:56:13.511829 | 
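[Editor's note] The systemd tasks in this block template a unit and a ceph-radosgw.target, enable the target, and start one containerized rgw instance per node (rgw0 on port 8081). A minimal sketch of that sequence with the stock template/systemd modules; the unit name pattern, template file names and the rgw_instances variable are assumptions inferred from the loop items visible in the log:

# Illustrative sketch only; file and unit names are assumptions.
- name: generate systemd unit file (sketch)
  ansible.builtin.template:
    src: ceph-radosgw.service.j2
    dest: /etc/systemd/system/ceph-radosgw@.service
    mode: "0644"

- name: generate systemd ceph-radosgw target file (sketch)
  ansible.builtin.template:
    src: ceph-radosgw.target.j2
    dest: /etc/systemd/system/ceph-radosgw.target
    mode: "0644"

- name: enable ceph-radosgw.target (sketch)
  ansible.builtin.systemd:
    name: ceph-radosgw.target
    enabled: true
    daemon_reload: true

- name: systemd start rgw container (sketch)
  ansible.builtin.systemd:
    name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: started
    enabled: true
  loop: "{{ rgw_instances }}"  # e.g. {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}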
orchestrator | skipping: [testbed-node-5] 2025-05-23 00:56:13.511834 | orchestrator | 2025-05-23 00:56:13.511838 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-23 00:56:13.511843 | orchestrator | Friday 23 May 2025 00:56:06 +0000 (0:00:01.175) 0:12:51.570 ************ 2025-05-23 00:56:13.511848 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511852 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511857 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511862 | orchestrator | 2025-05-23 00:56:13.511866 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-23 00:56:13.511875 | orchestrator | Friday 23 May 2025 00:56:06 +0000 (0:00:00.736) 0:12:52.307 ************ 2025-05-23 00:56:13.511880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:56:13.511884 | orchestrator | 2025-05-23 00:56:13.511889 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-23 00:56:13.511896 | orchestrator | Friday 23 May 2025 00:56:07 +0000 (0:00:00.770) 0:12:53.077 ************ 2025-05-23 00:56:13.511901 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.511906 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.511911 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.511915 | orchestrator | 2025-05-23 00:56:13.511920 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-23 00:56:13.511925 | orchestrator | Friday 23 May 2025 00:56:07 +0000 (0:00:00.330) 0:12:53.408 ************ 2025-05-23 00:56:13.511930 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.511934 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.511939 | orchestrator | changed: [testbed-node-5] 2025-05-23 00:56:13.511944 | orchestrator | 2025-05-23 00:56:13.511949 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-23 00:56:13.511953 | orchestrator | Friday 23 May 2025 00:56:09 +0000 (0:00:01.587) 0:12:54.995 ************ 2025-05-23 00:56:13.511958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:56:13.511963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:56:13.511971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:56:13.511975 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:56:13.511980 | orchestrator | 2025-05-23 00:56:13.511985 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-23 00:56:13.511989 | orchestrator | Friday 23 May 2025 00:56:10 +0000 (0:00:00.650) 0:12:55.645 ************ 2025-05-23 00:56:13.511994 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:56:13.511999 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:56:13.512003 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:56:13.512008 | orchestrator | 2025-05-23 00:56:13.512013 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-23 00:56:13.512017 | orchestrator | Friday 23 May 2025 00:56:10 +0000 (0:00:00.420) 0:12:56.066 ************ 2025-05-23 00:56:13.512022 | orchestrator | changed: [testbed-node-3] 2025-05-23 00:56:13.512027 | orchestrator | changed: [testbed-node-4] 2025-05-23 00:56:13.512031 | orchestrator 
| changed: [testbed-node-5] 2025-05-23 00:56:13.512036 | orchestrator | 2025-05-23 00:56:13.512041 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:56:13.512046 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-23 00:56:13.512051 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-23 00:56:13.512056 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-23 00:56:13.512061 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-23 00:56:13.512066 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-23 00:56:13.512070 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-23 00:56:13.512075 | orchestrator | 2025-05-23 00:56:13.512080 | orchestrator | 2025-05-23 00:56:13.512085 | orchestrator | 2025-05-23 00:56:13.512092 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:56:13.512097 | orchestrator | Friday 23 May 2025 00:56:11 +0000 (0:00:01.364) 0:12:57.430 ************ 2025-05-23 00:56:13.512102 | orchestrator | =============================================================================== 2025-05-23 00:56:13.512107 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 48.59s 2025-05-23 00:56:13.512111 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.46s 2025-05-23 00:56:13.512116 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 23.88s 2025-05-23 00:56:13.512121 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.50s 2025-05-23 00:56:13.512125 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.14s 2025-05-23 00:56:13.512130 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.29s 2025-05-23 00:56:13.512135 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.56s 2025-05-23 00:56:13.512139 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.95s 2025-05-23 00:56:13.512144 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.51s 2025-05-23 00:56:13.512149 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.44s 2025-05-23 00:56:13.512153 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.30s 2025-05-23 00:56:13.512158 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.12s 2025-05-23 00:56:13.512163 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.94s 2025-05-23 00:56:13.512167 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.85s 2025-05-23 00:56:13.512172 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.44s 2025-05-23 00:56:13.512177 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.15s 2025-05-23 00:56:13.512181 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.58s 2025-05-23 00:56:13.512188 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.41s 2025-05-23 00:56:13.512193 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.34s 2025-05-23 00:56:13.512198 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.34s 2025-05-23 00:56:13.512203 | orchestrator | 2025-05-23 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:16.529744 | orchestrator | 2025-05-23 00:56:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:16.530215 | orchestrator | 2025-05-23 00:56:16 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:56:16.530712 | orchestrator | 2025-05-23 00:56:16 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:56:16.531769 | orchestrator | 2025-05-23 00:56:16 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:56:16.531791 | orchestrator | 2025-05-23 00:56:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:19.583808 | orchestrator | 2025-05-23 00:56:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:56:19.585742 | orchestrator | 2025-05-23 00:56:19 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:56:19.587643 | orchestrator | 2025-05-23 00:56:19 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:56:19.589489 | orchestrator | 2025-05-23 00:56:19 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:56:19.589540 | orchestrator | 2025-05-23 00:56:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:56:22.626873 | orchestrator | 2025-05-23 00:56:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in 
state STARTED 2025-05-23 00:56:22.628683 | orchestrator | [polling output condensed: on every check from 00:56:22 through 00:57:45, roughly every three seconds, the four tasks eee81a36-e0fa-4360-a4d6-6ece23412765, ba588c27-023a-4643-93d0-6b669301227f, 886af4ce-fcd8-4908-b58a-50bf11549a47 and 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb were each reported "is in state STARTED", followed by "Wait 1 second(s) until the next check"] 2025-05-23 00:57:45.044103 | orchestrator | 2025-05-23 00:57:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:57:45.046760 |
orchestrator | 2025-05-23 00:57:45 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:57:45.048558 | orchestrator | 2025-05-23 00:57:45 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:57:45.050801 | orchestrator | 2025-05-23 00:57:45 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:57:45.050840 | orchestrator | 2025-05-23 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:57:48.115986 | orchestrator | 2025-05-23 00:57:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:57:48.116987 | orchestrator | 2025-05-23 00:57:48 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:57:48.117869 | orchestrator | 2025-05-23 00:57:48 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:57:48.119463 | orchestrator | 2025-05-23 00:57:48 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:57:48.119488 | orchestrator | 2025-05-23 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:57:51.174549 | orchestrator | 2025-05-23 00:57:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:57:51.176468 | orchestrator | 2025-05-23 00:57:51 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:57:51.180155 | orchestrator | 2025-05-23 00:57:51 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:57:51.181141 | orchestrator | 2025-05-23 00:57:51 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:57:51.181168 | orchestrator | 2025-05-23 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:57:54.248180 | orchestrator | 2025-05-23 00:57:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:57:54.250117 | orchestrator | 2025-05-23 00:57:54 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:57:54.251658 | orchestrator | 2025-05-23 00:57:54 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:57:54.253047 | orchestrator | 2025-05-23 00:57:54 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:57:54.253085 | orchestrator | 2025-05-23 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:57:57.307948 | orchestrator | 2025-05-23 00:57:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:57:57.309871 | orchestrator | 2025-05-23 00:57:57 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:57:57.312074 | orchestrator | 2025-05-23 00:57:57 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state STARTED 2025-05-23 00:57:57.313809 | orchestrator | 2025-05-23 00:57:57 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:57:57.313864 | orchestrator | 2025-05-23 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:00.388399 | orchestrator | 2025-05-23 00:58:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:00.389924 | orchestrator | 2025-05-23 00:58:00 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:58:00.392146 | orchestrator | 2025-05-23 00:58:00 | INFO  | Task 886af4ce-fcd8-4908-b58a-50bf11549a47 is in state SUCCESS 2025-05-23 00:58:00.394776 | 
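[Editor's note] The output that follows switches from ceph-ansible to the kolla-ansible horizon play. Its first two tasks build dynamic groups so later plays only target hosts where a service is enabled; the enable_horizon_True item visible below comes from exactly this kind of group_by call. A minimal sketch, with the variable name taken from the log and the loop shape assumed rather than copied from kolla-ansible:

# Illustrative sketch of the grouping tasks, not the kolla-ansible source.
- name: Group hosts based on Kolla action (sketch)
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action | default('deploy') }}"

- name: Group hosts based on enabled services (sketch)
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_horizon_{{ enable_horizon | bool }}"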
orchestrator | 2025-05-23 00:58:00.394812 | orchestrator | 2025-05-23 00:58:00.394824 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:58:00.394837 | orchestrator | 2025-05-23 00:58:00.394848 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:58:00.394860 | orchestrator | Friday 23 May 2025 00:56:15 +0000 (0:00:00.347) 0:00:00.347 ************ 2025-05-23 00:58:00.394872 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.394884 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.394894 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.394905 | orchestrator | 2025-05-23 00:58:00.394916 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:58:00.394927 | orchestrator | Friday 23 May 2025 00:56:16 +0000 (0:00:00.426) 0:00:00.774 ************ 2025-05-23 00:58:00.394938 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-23 00:58:00.394949 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-23 00:58:00.394960 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-23 00:58:00.394997 | orchestrator | 2025-05-23 00:58:00.395009 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-23 00:58:00.395020 | orchestrator | 2025-05-23 00:58:00.395031 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-23 00:58:00.395042 | orchestrator | Friday 23 May 2025 00:56:16 +0000 (0:00:00.329) 0:00:01.104 ************ 2025-05-23 00:58:00.395067 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:00.395079 | orchestrator | 2025-05-23 00:58:00.395161 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-23 00:58:00.395175 | orchestrator | Friday 23 May 2025 00:56:17 +0000 (0:00:00.850) 0:00:01.955 ************ 2025-05-23 00:58:00.395192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.395232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.395257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.395269 | orchestrator | 2025-05-23 00:58:00.395318 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-23 00:58:00.395329 | orchestrator | Friday 23 May 2025 00:56:19 +0000 (0:00:01.965) 0:00:03.920 ************ 2025-05-23 00:58:00.395341 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.395351 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.395362 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.395693 | orchestrator | 2025-05-23 00:58:00.395713 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-23 00:58:00.395727 | orchestrator | Friday 23 May 2025 00:56:19 +0000 (0:00:00.301) 0:00:04.222 ************ 2025-05-23 00:58:00.395748 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-23 00:58:00.395769 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-23 00:58:00.395780 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-23 00:58:00.395791 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-23 00:58:00.395801 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-23 00:58:00.395812 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-23 00:58:00.395823 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-23 00:58:00.395833 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-23 00:58:00.395844 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-23 00:58:00.395854 | orchestrator | 
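[Editor's note] Around this point the horizon role walks a per-service list: dashboards whose flag is off (cloudkitty, ironic, masakari, mistral, tacker, trove, watcher) are skipped, while each enabled service includes policy_item.yml, as the skipping:/included: lines immediately before and after this sketch show. A minimal sketch of that dispatch, with the enabled flags hard-coded to mirror this run instead of being derived from kolla-ansible variables:

# Illustrative sketch; the real role derives 'enabled' from kolla-ansible
# variables such as enable_designate rather than literals.
- name: horizon | include per-service policy tasks (sketch)
  ansible.builtin.include_tasks: policy_item.yml
  when: item.enabled | bool
  loop:
    - { name: ceilometer, enabled: "yes" }
    - { name: cinder, enabled: "yes" }
    - { name: cloudkitty, enabled: false }
    - { name: designate, enabled: true }
    - { name: glance, enabled: true }
    - { name: heat, enabled: true }
    - { name: ironic, enabled: false }
    - { name: keystone, enabled: true }
    - { name: magnum, enabled: true }
    - { name: manila, enabled: true }
    - { name: neutron, enabled: true }
    - { name: nova, enabled: true }
    - { name: octavia, enabled: true }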
skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-23 00:58:00.395865 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-23 00:58:00.395876 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-23 00:58:00.395893 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-23 00:58:00.395904 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-23 00:58:00.395915 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-23 00:58:00.395926 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-23 00:58:00.395937 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-23 00:58:00.395947 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-23 00:58:00.395958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-23 00:58:00.395968 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-23 00:58:00.395979 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-23 00:58:00.395991 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-23 00:58:00.396004 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-23 00:58:00.396014 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-23 00:58:00.396025 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-23 00:58:00.396036 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-23 00:58:00.396048 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-23 00:58:00.396059 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-23 00:58:00.396069 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-23 00:58:00.396080 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-23 00:58:00.396097 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-23 00:58:00.396108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': 
True}) 2025-05-23 00:58:00.396118 | orchestrator | 2025-05-23 00:58:00.396129 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.396140 | orchestrator | Friday 23 May 2025 00:56:20 +0000 (0:00:00.932) 0:00:05.155 ************ 2025-05-23 00:58:00.396151 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.396161 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.396172 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.396183 | orchestrator | 2025-05-23 00:58:00.396193 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.396204 | orchestrator | Friday 23 May 2025 00:56:20 +0000 (0:00:00.434) 0:00:05.589 ************ 2025-05-23 00:58:00.396215 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396226 | orchestrator | 2025-05-23 00:58:00.396244 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.396255 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.121) 0:00:05.711 ************ 2025-05-23 00:58:00.396266 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396277 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.396309 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.396320 | orchestrator | 2025-05-23 00:58:00.396330 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.396341 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.468) 0:00:06.179 ************ 2025-05-23 00:58:00.396352 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.396363 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.396373 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.396384 | orchestrator | 2025-05-23 00:58:00.396394 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.396405 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.321) 0:00:06.501 ************ 2025-05-23 00:58:00.396416 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396426 | orchestrator | 2025-05-23 00:58:00.396437 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.396448 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.099) 0:00:06.600 ************ 2025-05-23 00:58:00.396458 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396469 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.396480 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.396491 | orchestrator | 2025-05-23 00:58:00.396506 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.396517 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.490) 0:00:07.091 ************ 2025-05-23 00:58:00.396528 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.396538 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.396549 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.396560 | orchestrator | 2025-05-23 00:58:00.396570 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.396581 | orchestrator | Friday 23 May 2025 00:56:23 +0000 (0:00:00.576) 0:00:07.668 ************ 2025-05-23 00:58:00.396591 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396602 | 
orchestrator | 2025-05-23 00:58:00.396613 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.396624 | orchestrator | Friday 23 May 2025 00:56:23 +0000 (0:00:00.175) 0:00:07.844 ************ 2025-05-23 00:58:00.396634 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396645 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.396655 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.396666 | orchestrator | 2025-05-23 00:58:00.396677 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.396695 | orchestrator | Friday 23 May 2025 00:56:23 +0000 (0:00:00.488) 0:00:08.332 ************ 2025-05-23 00:58:00.396706 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.396717 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.396727 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.396738 | orchestrator | 2025-05-23 00:58:00.396748 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.396759 | orchestrator | Friday 23 May 2025 00:56:24 +0000 (0:00:00.471) 0:00:08.804 ************ 2025-05-23 00:58:00.396770 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396780 | orchestrator | 2025-05-23 00:58:00.396791 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.396801 | orchestrator | Friday 23 May 2025 00:56:24 +0000 (0:00:00.135) 0:00:08.939 ************ 2025-05-23 00:58:00.396812 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396823 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.396833 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.396844 | orchestrator | 2025-05-23 00:58:00.396854 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.396865 | orchestrator | Friday 23 May 2025 00:56:24 +0000 (0:00:00.464) 0:00:09.403 ************ 2025-05-23 00:58:00.396876 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.396887 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.396897 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.396908 | orchestrator | 2025-05-23 00:58:00.396918 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.396929 | orchestrator | Friday 23 May 2025 00:56:25 +0000 (0:00:00.321) 0:00:09.725 ************ 2025-05-23 00:58:00.396939 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396950 | orchestrator | 2025-05-23 00:58:00.396961 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.396971 | orchestrator | Friday 23 May 2025 00:56:25 +0000 (0:00:00.308) 0:00:10.033 ************ 2025-05-23 00:58:00.396982 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.396992 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.397003 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397014 | orchestrator | 2025-05-23 00:58:00.397024 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397035 | orchestrator | Friday 23 May 2025 00:56:25 +0000 (0:00:00.279) 0:00:10.312 ************ 2025-05-23 00:58:00.397045 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397056 | orchestrator | ok: 
[testbed-node-1] 2025-05-23 00:58:00.397066 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397077 | orchestrator | 2025-05-23 00:58:00.397087 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.397098 | orchestrator | Friday 23 May 2025 00:56:26 +0000 (0:00:00.532) 0:00:10.845 ************ 2025-05-23 00:58:00.397109 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397119 | orchestrator | 2025-05-23 00:58:00.397130 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.397141 | orchestrator | Friday 23 May 2025 00:56:26 +0000 (0:00:00.123) 0:00:10.968 ************ 2025-05-23 00:58:00.397152 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397162 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.397173 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397183 | orchestrator | 2025-05-23 00:58:00.397194 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397205 | orchestrator | Friday 23 May 2025 00:56:26 +0000 (0:00:00.541) 0:00:11.510 ************ 2025-05-23 00:58:00.397222 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397233 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.397244 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397254 | orchestrator | 2025-05-23 00:58:00.397265 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.397312 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:00.461) 0:00:11.971 ************ 2025-05-23 00:58:00.397324 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397334 | orchestrator | 2025-05-23 00:58:00.397345 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.397355 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:00.155) 0:00:12.126 ************ 2025-05-23 00:58:00.397366 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397377 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.397387 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397398 | orchestrator | 2025-05-23 00:58:00.397408 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397419 | orchestrator | Friday 23 May 2025 00:56:28 +0000 (0:00:00.536) 0:00:12.663 ************ 2025-05-23 00:58:00.397429 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397440 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.397451 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397461 | orchestrator | 2025-05-23 00:58:00.397472 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.397483 | orchestrator | Friday 23 May 2025 00:56:28 +0000 (0:00:00.403) 0:00:13.067 ************ 2025-05-23 00:58:00.397498 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397509 | orchestrator | 2025-05-23 00:58:00.397520 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.397530 | orchestrator | Friday 23 May 2025 00:56:28 +0000 (0:00:00.268) 0:00:13.335 ************ 2025-05-23 00:58:00.397541 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397552 | orchestrator | skipping: [testbed-node-1] 2025-05-23 
00:58:00.397562 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397573 | orchestrator | 2025-05-23 00:58:00.397583 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397594 | orchestrator | Friday 23 May 2025 00:56:29 +0000 (0:00:00.294) 0:00:13.629 ************ 2025-05-23 00:58:00.397604 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397615 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.397626 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397636 | orchestrator | 2025-05-23 00:58:00.397647 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.397658 | orchestrator | Friday 23 May 2025 00:56:29 +0000 (0:00:00.419) 0:00:14.049 ************ 2025-05-23 00:58:00.397668 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397679 | orchestrator | 2025-05-23 00:58:00.397689 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.397700 | orchestrator | Friday 23 May 2025 00:56:29 +0000 (0:00:00.135) 0:00:14.184 ************ 2025-05-23 00:58:00.397711 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397721 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.397732 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397742 | orchestrator | 2025-05-23 00:58:00.397753 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397764 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:00.442) 0:00:14.626 ************ 2025-05-23 00:58:00.397774 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397785 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.397796 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397806 | orchestrator | 2025-05-23 00:58:00.397817 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.397828 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:00.503) 0:00:15.130 ************ 2025-05-23 00:58:00.397839 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397849 | orchestrator | 2025-05-23 00:58:00.397860 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.397871 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:00.136) 0:00:15.267 ************ 2025-05-23 00:58:00.397881 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.397899 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.397910 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.397920 | orchestrator | 2025-05-23 00:58:00.397931 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-23 00:58:00.397942 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.515) 0:00:15.783 ************ 2025-05-23 00:58:00.397952 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:00.397963 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:00.397973 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:00.397984 | orchestrator | 2025-05-23 00:58:00.397995 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-23 00:58:00.398005 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.633) 0:00:16.416 ************ 2025-05-23 00:58:00.398067 | orchestrator | skipping: 
[testbed-node-0] 2025-05-23 00:58:00.398081 | orchestrator | 2025-05-23 00:58:00.398092 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-23 00:58:00.398103 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.145) 0:00:16.562 ************ 2025-05-23 00:58:00.398113 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.398124 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.398135 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.398145 | orchestrator | 2025-05-23 00:58:00.398156 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-23 00:58:00.398167 | orchestrator | Friday 23 May 2025 00:56:32 +0000 (0:00:00.458) 0:00:17.020 ************ 2025-05-23 00:58:00.398177 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:00.398188 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:58:00.398198 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:58:00.398209 | orchestrator | 2025-05-23 00:58:00.398220 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-23 00:58:00.398230 | orchestrator | Friday 23 May 2025 00:56:35 +0000 (0:00:03.097) 0:00:20.117 ************ 2025-05-23 00:58:00.398241 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-23 00:58:00.398258 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-23 00:58:00.398270 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-23 00:58:00.398334 | orchestrator | 2025-05-23 00:58:00.398347 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-23 00:58:00.398357 | orchestrator | Friday 23 May 2025 00:56:37 +0000 (0:00:02.388) 0:00:22.506 ************ 2025-05-23 00:58:00.398368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-23 00:58:00.398379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-23 00:58:00.398390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-23 00:58:00.398401 | orchestrator | 2025-05-23 00:58:00.398411 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-23 00:58:00.398422 | orchestrator | Friday 23 May 2025 00:56:40 +0000 (0:00:02.765) 0:00:25.271 ************ 2025-05-23 00:58:00.398432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-23 00:58:00.398443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-23 00:58:00.398459 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-23 00:58:00.398470 | orchestrator | 2025-05-23 00:58:00.398481 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-23 00:58:00.398492 | orchestrator | Friday 23 May 2025 00:56:43 +0000 (0:00:02.658) 0:00:27.930 ************ 2025-05-23 00:58:00.398503 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.398513 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.398532 
| orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.398543 | orchestrator | 2025-05-23 00:58:00.398554 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-23 00:58:00.398564 | orchestrator | Friday 23 May 2025 00:56:43 +0000 (0:00:00.543) 0:00:28.473 ************ 2025-05-23 00:58:00.398575 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.398585 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.398596 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.398606 | orchestrator | 2025-05-23 00:58:00.398617 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-23 00:58:00.398628 | orchestrator | Friday 23 May 2025 00:56:44 +0000 (0:00:00.460) 0:00:28.934 ************ 2025-05-23 00:58:00.398637 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:00.398647 | orchestrator | 2025-05-23 00:58:00.398656 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-23 00:58:00.398666 | orchestrator | Friday 23 May 2025 00:56:44 +0000 (0:00:00.544) 0:00:29.478 ************ 2025-05-23 00:58:00.398684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.398708 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.398735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.398747 | orchestrator | 2025-05-23 00:58:00.398757 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-23 00:58:00.398766 | orchestrator | Friday 23 May 2025 00:56:46 +0000 (0:00:01.555) 0:00:31.033 ************ 2025-05-23 00:58:00.398777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.398793 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.398890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.398924 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.398940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.398951 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.398960 | orchestrator | 2025-05-23 00:58:00.398970 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-23 00:58:00.398980 | orchestrator | Friday 23 May 2025 00:56:47 +0000 (0:00:00.647) 0:00:31.681 ************ 2025-05-23 00:58:00.399006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.399024 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.399035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.399046 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.399069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-23 00:58:00.399086 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.399096 | orchestrator | 2025-05-23 00:58:00.399105 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-23 00:58:00.399115 | orchestrator | Friday 23 May 2025 00:56:48 +0000 (0:00:01.000) 0:00:32.681 ************ 2025-05-23 00:58:00.399130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.399147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.399171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-23 00:58:00.399188 | orchestrator | 2025-05-23 00:58:00.399198 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-23 00:58:00.399207 | orchestrator | Friday 23 May 2025 00:56:53 +0000 (0:00:05.362) 0:00:38.043 ************ 2025-05-23 00:58:00.399217 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:00.399226 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:00.399236 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:00.399246 | orchestrator | 2025-05-23 00:58:00.399255 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-23 00:58:00.399265 | orchestrator | Friday 23 May 2025 00:56:53 +0000 (0:00:00.446) 0:00:38.490 ************ 2025-05-23 00:58:00.399276 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:00.399318 | orchestrator | 2025-05-23 00:58:00.399334 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-23 00:58:00.399359 | orchestrator | Friday 23 May 2025 00:56:54 +0000 (0:00:00.531) 0:00:39.021 ************ 2025-05-23 00:58:00.399382 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:00.399397 | orchestrator | 2025-05-23 00:58:00.399412 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-23 00:58:00.399427 | orchestrator | Friday 23 May 2025 00:56:56 +0000 (0:00:02.360) 0:00:41.382 ************ 2025-05-23 00:58:00.399442 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:00.399455 | orchestrator | 2025-05-23 00:58:00.399470 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-23 00:58:00.399485 | orchestrator | Friday 23 May 2025 00:56:58 +0000 (0:00:02.188) 0:00:43.571 ************ 2025-05-23 00:58:00.399500 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:00.399512 | orchestrator | 2025-05-23 00:58:00.399526 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-23 00:58:00.399541 | orchestrator | Friday 23 May 2025 00:57:13 +0000 (0:00:14.059) 0:00:57.630 ************ 2025-05-23 00:58:00.399555 | orchestrator | 2025-05-23 00:58:00.399570 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-23 00:58:00.399586 | orchestrator | Friday 23 May 2025 00:57:13 +0000 (0:00:00.055) 0:00:57.686 ************ 2025-05-23 00:58:00.399603 | orchestrator | 2025-05-23 00:58:00.399620 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-23 00:58:00.399637 | orchestrator | Friday 23 May 2025 00:57:13 +0000 (0:00:00.192) 0:00:57.878 ************ 2025-05-23 
00:58:00.399652 | orchestrator |
2025-05-23 00:58:00.399668 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-23 00:58:00.399683 | orchestrator | Friday 23 May 2025 00:57:13 +0000 (0:00:00.067) 0:00:57.945 ************
2025-05-23 00:58:00.399699 | orchestrator | changed: [testbed-node-0]
2025-05-23 00:58:00.399715 | orchestrator | changed: [testbed-node-2]
2025-05-23 00:58:00.399731 | orchestrator | changed: [testbed-node-1]
2025-05-23 00:58:00.399748 | orchestrator |
2025-05-23 00:58:00.399759 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 00:58:00.399770 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-23 00:58:00.399780 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-23 00:58:00.399790 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-23 00:58:00.399800 | orchestrator |
2025-05-23 00:58:00.399809 | orchestrator |
2025-05-23 00:58:00.399818 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 00:58:00.399828 | orchestrator | Friday 23 May 2025 00:57:59 +0000 (0:00:45.739) 0:01:43.685 ************
2025-05-23 00:58:00.399847 | orchestrator | ===============================================================================
2025-05-23 00:58:00.399857 | orchestrator | horizon : Restart horizon container ------------------------------------ 45.74s
2025-05-23 00:58:00.399866 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.06s
2025-05-23 00:58:00.399876 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.36s
2025-05-23 00:58:00.399885 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.10s
2025-05-23 00:58:00.399895 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.77s
2025-05-23 00:58:00.399904 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.66s
2025-05-23 00:58:00.399913 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.39s
2025-05-23 00:58:00.399923 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.36s
2025-05-23 00:58:00.399932 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.19s
2025-05-23 00:58:00.399942 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.97s
2025-05-23 00:58:00.399951 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.56s
2025-05-23 00:58:00.399960 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.00s
2025-05-23 00:58:00.399970 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s
2025-05-23 00:58:00.399988 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s
2025-05-23 00:58:00.399998 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s
2025-05-23 00:58:00.400008 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s
2025-05-23 00:58:00.400017 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s
2025-05-23 00:58:00.400027 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-05-23 00:58:00.400036 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.54s
2025-05-23 00:58:00.400046 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s
2025-05-23 00:58:00.400055 | orchestrator | 2025-05-23 00:58:00 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:00.400065 | orchestrator | 2025-05-23 00:58:00 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:03.452478 | orchestrator | 2025-05-23 00:58:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:03.454635 | orchestrator | 2025-05-23 00:58:03 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED
2025-05-23 00:58:03.456005 | orchestrator | 2025-05-23 00:58:03 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:03.456032 | orchestrator | 2025-05-23 00:58:03 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:06.502565 | orchestrator | 2025-05-23 00:58:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:06.503830 | orchestrator | 2025-05-23 00:58:06 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED
2025-05-23 00:58:06.505313 | orchestrator | 2025-05-23 00:58:06 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:06.505366 | orchestrator | 2025-05-23 00:58:06 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:09.557345 | orchestrator | 2025-05-23 00:58:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:09.561148 | orchestrator | 2025-05-23 00:58:09 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED
2025-05-23 00:58:09.562976 | orchestrator | 2025-05-23 00:58:09 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:09.563852 | orchestrator | 2025-05-23 00:58:09 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:12.610646 | orchestrator | 2025-05-23 00:58:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:12.611712 | orchestrator | 2025-05-23 00:58:12 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED
2025-05-23 00:58:12.613079 | orchestrator | 2025-05-23 00:58:12 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:12.613116 | orchestrator | 2025-05-23 00:58:12 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:15.667047 | orchestrator | 2025-05-23 00:58:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:15.669149 | orchestrator | 2025-05-23 00:58:15 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED
2025-05-23 00:58:15.671304 | orchestrator | 2025-05-23 00:58:15 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED
2025-05-23 00:58:15.671338 | orchestrator | 2025-05-23 00:58:15 | INFO  | Wait 1 second(s) until the next check
2025-05-23 00:58:18.724673 | orchestrator | 2025-05-23 00:58:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 00:58:18.726365 | orchestrator | 2025-05-23 00:58:18 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state
STARTED 2025-05-23 00:58:18.727494 | orchestrator | 2025-05-23 00:58:18 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:18.727532 | orchestrator | 2025-05-23 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:21.774083 | orchestrator | 2025-05-23 00:58:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:21.775513 | orchestrator | 2025-05-23 00:58:21 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:58:21.777123 | orchestrator | 2025-05-23 00:58:21 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:21.777152 | orchestrator | 2025-05-23 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:24.827835 | orchestrator | 2025-05-23 00:58:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:24.828360 | orchestrator | 2025-05-23 00:58:24 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state STARTED 2025-05-23 00:58:24.829857 | orchestrator | 2025-05-23 00:58:24 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:24.829896 | orchestrator | 2025-05-23 00:58:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:27.902594 | orchestrator | 2025-05-23 00:58:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:27.906338 | orchestrator | 2025-05-23 00:58:27.906446 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 00:58:27.906474 | orchestrator | 2025-05-23 00:58:27.906495 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-23 00:58:27.906513 | orchestrator | 2025-05-23 00:58:27.906524 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-23 00:58:27.906536 | orchestrator | Friday 23 May 2025 00:56:16 +0000 (0:00:01.111) 0:00:01.111 ************ 2025-05-23 00:58:27.906547 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:58:27.906560 | orchestrator | 2025-05-23 00:58:27.906571 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-23 00:58:27.906605 | orchestrator | Friday 23 May 2025 00:56:17 +0000 (0:00:00.555) 0:00:01.666 ************ 2025-05-23 00:58:27.906643 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-23 00:58:27.906655 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-23 00:58:27.906666 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-23 00:58:27.906677 | orchestrator | 2025-05-23 00:58:27.906687 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-23 00:58:27.906698 | orchestrator | Friday 23 May 2025 00:56:18 +0000 (0:00:00.917) 0:00:02.584 ************ 2025-05-23 00:58:27.906709 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:58:27.906720 | orchestrator | 2025-05-23 00:58:27.906730 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-23 00:58:27.906742 | orchestrator | Friday 23 May 2025 00:56:19 +0000 (0:00:00.800) 0:00:03.384 ************ 2025-05-23 00:58:27.906753 | 
orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.906764 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.906774 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.906785 | orchestrator | 2025-05-23 00:58:27.906802 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-23 00:58:27.906821 | orchestrator | Friday 23 May 2025 00:56:19 +0000 (0:00:00.628) 0:00:04.012 ************ 2025-05-23 00:58:27.906838 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.906855 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.906884 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.906903 | orchestrator | 2025-05-23 00:58:27.906924 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-23 00:58:27.906941 | orchestrator | Friday 23 May 2025 00:56:20 +0000 (0:00:00.335) 0:00:04.348 ************ 2025-05-23 00:58:27.906957 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.906968 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.906979 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.906989 | orchestrator | 2025-05-23 00:58:27.907005 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-23 00:58:27.907024 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.936) 0:00:05.284 ************ 2025-05-23 00:58:27.907041 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.907057 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.907075 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.907092 | orchestrator | 2025-05-23 00:58:27.907109 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-23 00:58:27.907127 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.294) 0:00:05.578 ************ 2025-05-23 00:58:27.907146 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.907166 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.907184 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.907198 | orchestrator | 2025-05-23 00:58:27.907209 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-23 00:58:27.907220 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.317) 0:00:05.896 ************ 2025-05-23 00:58:27.907231 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.907242 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.907282 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.907293 | orchestrator | 2025-05-23 00:58:27.907304 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-23 00:58:27.907315 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.342) 0:00:06.239 ************ 2025-05-23 00:58:27.907326 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.907338 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.907349 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.907359 | orchestrator | 2025-05-23 00:58:27.907370 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-23 00:58:27.907381 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.578) 0:00:06.817 ************ 2025-05-23 00:58:27.907392 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.907414 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.907424 | orchestrator | ok: [testbed-node-5] 
2025-05-23 00:58:27.907435 | orchestrator | 2025-05-23 00:58:27.907446 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-23 00:58:27.907456 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.340) 0:00:07.158 ************ 2025-05-23 00:58:27.907467 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-23 00:58:27.907477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:58:27.907488 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:58:27.907498 | orchestrator | 2025-05-23 00:58:27.907509 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-23 00:58:27.907520 | orchestrator | Friday 23 May 2025 00:56:23 +0000 (0:00:00.727) 0:00:07.886 ************ 2025-05-23 00:58:27.907530 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.907541 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.907551 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.907562 | orchestrator | 2025-05-23 00:58:27.907574 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-23 00:58:27.907593 | orchestrator | Friday 23 May 2025 00:56:24 +0000 (0:00:00.478) 0:00:08.364 ************ 2025-05-23 00:58:27.907631 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-23 00:58:27.907658 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:58:27.907680 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:58:27.907698 | orchestrator | 2025-05-23 00:58:27.907718 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-23 00:58:27.907735 | orchestrator | Friday 23 May 2025 00:56:26 +0000 (0:00:02.460) 0:00:10.825 ************ 2025-05-23 00:58:27.907751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:58:27.907762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:58:27.907781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:58:27.907793 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.907803 | orchestrator | 2025-05-23 00:58:27.907814 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-23 00:58:27.907824 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:00.453) 0:00:11.278 ************ 2025-05-23 00:58:27.907837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907874 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.907884 | orchestrator | 2025-05-23 00:58:27.907895 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-23 00:58:27.907906 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:00.653) 0:00:11.931 ************ 2025-05-23 00:58:27.907919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:58:27.907965 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.907976 | orchestrator | 2025-05-23 00:58:27.907987 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-23 00:58:27.907997 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:00.164) 0:00:12.096 ************ 2025-05-23 00:58:27.908011 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a5f5aa308057', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-23 00:56:25.104366', 'end': '2025-05-23 00:56:25.141001', 'delta': '0:00:00.036635', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a5f5aa308057'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-23 00:58:27.908044 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '180478cf69c8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-23 00:56:25.684946', 'end': '2025-05-23 00:56:25.728336', 'delta': '0:00:00.043390', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['180478cf69c8'], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-23 00:58:27.908062 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '269b56f838c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-23 00:56:26.256995', 'end': '2025-05-23 00:56:26.295001', 'delta': '0:00:00.038006', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['269b56f838c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-23 00:58:27.908074 | orchestrator | 2025-05-23 00:58:27.908085 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-23 00:58:27.908096 | orchestrator | Friday 23 May 2025 00:56:28 +0000 (0:00:00.230) 0:00:12.326 ************ 2025-05-23 00:58:27.908107 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.908118 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.908128 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.908139 | orchestrator | 2025-05-23 00:58:27.908156 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-23 00:58:27.908167 | orchestrator | Friday 23 May 2025 00:56:28 +0000 (0:00:00.530) 0:00:12.857 ************ 2025-05-23 00:58:27.908178 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-23 00:58:27.908188 | orchestrator | 2025-05-23 00:58:27.908199 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-23 00:58:27.908210 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:01.423) 0:00:14.280 ************ 2025-05-23 00:58:27.908221 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908231 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908242 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908280 | orchestrator | 2025-05-23 00:58:27.908292 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-23 00:58:27.908303 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:00.479) 0:00:14.760 ************ 2025-05-23 00:58:27.908313 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908324 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908335 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908346 | orchestrator | 2025-05-23 00:58:27.908357 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:58:27.908367 | orchestrator | Friday 23 May 2025 00:56:30 +0000 (0:00:00.428) 0:00:15.189 ************ 2025-05-23 00:58:27.908378 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908389 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908400 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908411 | orchestrator | 2025-05-23 00:58:27.908421 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-23 00:58:27.908432 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.323) 0:00:15.512 ************ 2025-05-23 00:58:27.908443 | orchestrator | ok: 
[testbed-node-3] 2025-05-23 00:58:27.908454 | orchestrator | 2025-05-23 00:58:27.908465 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-23 00:58:27.908476 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.127) 0:00:15.640 ************ 2025-05-23 00:58:27.908486 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908497 | orchestrator | 2025-05-23 00:58:27.908508 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:58:27.908519 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:00.221) 0:00:15.861 ************ 2025-05-23 00:58:27.908529 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908540 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908551 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908562 | orchestrator | 2025-05-23 00:58:27.908573 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-23 00:58:27.908584 | orchestrator | Friday 23 May 2025 00:56:32 +0000 (0:00:00.548) 0:00:16.410 ************ 2025-05-23 00:58:27.908594 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908605 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908616 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908627 | orchestrator | 2025-05-23 00:58:27.908638 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-23 00:58:27.908648 | orchestrator | Friday 23 May 2025 00:56:32 +0000 (0:00:00.416) 0:00:16.827 ************ 2025-05-23 00:58:27.908659 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908670 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908681 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908692 | orchestrator | 2025-05-23 00:58:27.908702 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-23 00:58:27.908718 | orchestrator | Friday 23 May 2025 00:56:33 +0000 (0:00:00.411) 0:00:17.238 ************ 2025-05-23 00:58:27.908737 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908755 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908783 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908812 | orchestrator | 2025-05-23 00:58:27.908831 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-23 00:58:27.908851 | orchestrator | Friday 23 May 2025 00:56:33 +0000 (0:00:00.363) 0:00:17.601 ************ 2025-05-23 00:58:27.908870 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908889 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908906 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.908926 | orchestrator | 2025-05-23 00:58:27.908944 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-23 00:58:27.908960 | orchestrator | Friday 23 May 2025 00:56:34 +0000 (0:00:00.622) 0:00:18.224 ************ 2025-05-23 00:58:27.908972 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.908982 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.908993 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.909004 | orchestrator | 2025-05-23 00:58:27.909021 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-23 
00:58:27.909032 | orchestrator | Friday 23 May 2025 00:56:34 +0000 (0:00:00.295) 0:00:18.520 ************ 2025-05-23 00:58:27.909043 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.909053 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.909064 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.909075 | orchestrator | 2025-05-23 00:58:27.909086 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-23 00:58:27.909096 | orchestrator | Friday 23 May 2025 00:56:34 +0000 (0:00:00.321) 0:00:18.842 ************ 2025-05-23 00:58:27.909108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17b95678--9240--5166--938b--e89fe6559568-osd--block--17b95678--9240--5166--938b--e89fe6559568', 'dm-uuid-LVM-ZZwSWRZHA2e2gvBfEGalnZuCgncqo9stGwibsev2qcB1RIlttmtzeRTYn2s1YPlP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0-osd--block--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0', 'dm-uuid-LVM-DNrsdmT8wqsoRqzmZbxiCLodm4pU7RBZWG6tPFDH6wj2dDw2rDAo1rNBYpypshxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--125adf16--eac9--5ada--96e7--bcd4f30a545d-osd--block--125adf16--eac9--5ada--96e7--bcd4f30a545d', 
'dm-uuid-LVM-BCxmKIXJutF9vlKp4BKB1Q8l1VtN5qBeelKjY7Rw2QjAo5NEVnruVcRqGRclAHko'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8bf3a31b--2d76--5988--bbd2--6800630d4c9a-osd--block--8bf3a31b--2d76--5988--bbd2--6800630d4c9a', 'dm-uuid-LVM-s1iYMsXZDLp7O33pcgRsDdeeRDpLn8e0FocQylEEkBSTHxQh86afwfAyqdvOeV3u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part1', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part14', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part15', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part16', 'scsi-SQEMU_QEMU_HARDDISK_e91133d1-5a4c-4c6b-aae9-a3102c4d2118-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--17b95678--9240--5166--938b--e89fe6559568-osd--block--17b95678--9240--5166--938b--e89fe6559568'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3GL2ip-Wdk8-SvYq-3j44-2Nl3-OHgp-XXXVcL', 'scsi-0QEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5', 'scsi-SQEMU_QEMU_HARDDISK_3c0d7b27-8ebd-4816-b389-8c3a005395e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0-osd--block--8fe28d0c--4762--50fd--9b7b--6f1bb47ff5c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60aTsd-C4Zb-g0m4-tLi0-tj0N-by2m-9zrULV', 'scsi-0QEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75', 'scsi-SQEMU_QEMU_HARDDISK_eb878625-a80c-49f3-a757-e0a303c4dd75'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c', 'scsi-SQEMU_QEMU_HARDDISK_252b3cc1-c875-426d-9475-c1c0edf2ac3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909570 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.909596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab21c0a7-19ba-47fa-9bfa-a97fbae45af4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0-osd--block--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0', 'dm-uuid-LVM-tRVA94W9EmSLkJeczqHMXIcyood4OFdpZpq2LYzfUgHH95d0i0dLrgDPWXuxNWTR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--125adf16--eac9--5ada--96e7--bcd4f30a545d-osd--block--125adf16--eac9--5ada--96e7--bcd4f30a545d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0rkl1-FbNd-ZPh5-sRAc-RVz3-orO8-96jH9g', 'scsi-0QEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7', 'scsi-SQEMU_QEMU_HARDDISK_2fc59eae-0e0c-4c3b-84f8-905b4655c6b7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dafe69f8--630b--5486--ba76--590e0b4d1820-osd--block--dafe69f8--630b--5486--ba76--590e0b4d1820', 'dm-uuid-LVM-ODjD6BqtmT1AFJHA3ZBheIhvhE3MvAXhtM0cOUyZ7GOIdZ7sJfg38u5r01vOEjsd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8bf3a31b--2d76--5988--bbd2--6800630d4c9a-osd--block--8bf3a31b--2d76--5988--bbd2--6800630d4c9a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQ0zEK-kQ0w-r2eu-fsPZ-nI4k-HPl8-obn7zz', 'scsi-0QEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652', 'scsi-SQEMU_QEMU_HARDDISK_2ac02f21-3ef0-4f70-9ec3-b7448efc3652'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d', 'scsi-SQEMU_QEMU_HARDDISK_29f848a2-d495-4783-815a-7e69d4da9d2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909712 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.909723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:58:27.909820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part1', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part14', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part15', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part16', 'scsi-SQEMU_QEMU_HARDDISK_c8efe3c1-6307-4e01-8bfc-afd4fa6a2572-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0-osd--block--1c1d7620--81eb--54f7--8ffb--e9df7a8995e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0A2T45-X40H-s0d6-nUxd-bzRE-jAFi-fH6wa8', 'scsi-0QEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7', 'scsi-SQEMU_QEMU_HARDDISK_18473d69-2fd0-4937-9240-f5fad34c2ed7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dafe69f8--630b--5486--ba76--590e0b4d1820-osd--block--dafe69f8--630b--5486--ba76--590e0b4d1820'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nE4PFT-HwJj-uK0a-LG5n-nYOl-mJsJ-OKMseI', 'scsi-0QEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127', 'scsi-SQEMU_QEMU_HARDDISK_5f24398e-55ab-4e45-a360-e924ed2b4127'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6', 'scsi-SQEMU_QEMU_HARDDISK_329d29a6-e648-44c1-9803-5cc5abc56db6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:58:27.909930 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.909941 | orchestrator | 2025-05-23 00:58:27.909952 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-23 00:58:27.909963 | orchestrator | Friday 23 May 2025 00:56:35 +0000 (0:00:00.536) 0:00:19.378 ************ 2025-05-23 00:58:27.909974 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-23 00:58:27.909985 | orchestrator | 2025-05-23 00:58:27.909995 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-23 00:58:27.910006 | orchestrator | Friday 23 May 2025 00:56:36 +0000 (0:00:01.384) 0:00:20.762 ************ 2025-05-23 00:58:27.910067 | orchestrator | 
ok: [testbed-node-3] 2025-05-23 00:58:27.910079 | orchestrator | 2025-05-23 00:58:27.910090 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-23 00:58:27.910101 | orchestrator | Friday 23 May 2025 00:56:36 +0000 (0:00:00.129) 0:00:20.892 ************ 2025-05-23 00:58:27.910111 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.910122 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.910133 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.910144 | orchestrator | 2025-05-23 00:58:27.910154 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-23 00:58:27.910165 | orchestrator | Friday 23 May 2025 00:56:37 +0000 (0:00:00.387) 0:00:21.280 ************ 2025-05-23 00:58:27.910188 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.910199 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.910209 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.910220 | orchestrator | 2025-05-23 00:58:27.910231 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-23 00:58:27.910242 | orchestrator | Friday 23 May 2025 00:56:37 +0000 (0:00:00.665) 0:00:21.946 ************ 2025-05-23 00:58:27.910277 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.910288 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.910299 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.910309 | orchestrator | 2025-05-23 00:58:27.910320 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:58:27.910331 | orchestrator | Friday 23 May 2025 00:56:38 +0000 (0:00:00.257) 0:00:22.204 ************ 2025-05-23 00:58:27.910342 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.910352 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.910363 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.910374 | orchestrator | 2025-05-23 00:58:27.910385 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:58:27.910395 | orchestrator | Friday 23 May 2025 00:56:39 +0000 (0:00:01.957) 0:00:24.162 ************ 2025-05-23 00:58:27.910406 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.910417 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.910428 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.910439 | orchestrator | 2025-05-23 00:58:27.910449 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:58:27.910460 | orchestrator | Friday 23 May 2025 00:56:40 +0000 (0:00:00.317) 0:00:24.479 ************ 2025-05-23 00:58:27.910471 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.910481 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.910492 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.910503 | orchestrator | 2025-05-23 00:58:27.910513 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:58:27.910524 | orchestrator | Friday 23 May 2025 00:56:40 +0000 (0:00:00.620) 0:00:25.100 ************ 2025-05-23 00:58:27.910535 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.910545 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.910556 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.910567 | orchestrator | 2025-05-23 00:58:27.910577 | orchestrator | TASK [ceph-facts : 
set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-23 00:58:27.910588 | orchestrator | Friday 23 May 2025 00:56:41 +0000 (0:00:00.528) 0:00:25.628 ************ 2025-05-23 00:58:27.910599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:58:27.910610 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:58:27.910621 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:58:27.910631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:58:27.910642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:58:27.910653 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:58:27.910664 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.910674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:58:27.910685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:58:27.910696 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.910707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:58:27.910718 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.910729 | orchestrator | 2025-05-23 00:58:27.910740 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-23 00:58:27.910759 | orchestrator | Friday 23 May 2025 00:56:42 +0000 (0:00:00.887) 0:00:26.516 ************ 2025-05-23 00:58:27.910770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:58:27.910788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:58:27.910799 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:58:27.910809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:58:27.910820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:58:27.910831 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.910841 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:58:27.910852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:58:27.910867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:58:27.910878 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.910889 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:58:27.910900 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.910913 | orchestrator | 2025-05-23 00:58:27.910932 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-23 00:58:27.910950 | orchestrator | Friday 23 May 2025 00:56:43 +0000 (0:00:00.707) 0:00:27.223 ************ 2025-05-23 00:58:27.910968 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-23 00:58:27.910985 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-23 00:58:27.911003 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-23 00:58:27.911021 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-23 00:58:27.911040 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-23 00:58:27.911056 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-23 00:58:27.911066 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 
2025-05-23 00:58:27.911077 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-23 00:58:27.911088 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-23 00:58:27.911099 | orchestrator | 2025-05-23 00:58:27.911109 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-23 00:58:27.911120 | orchestrator | Friday 23 May 2025 00:56:44 +0000 (0:00:01.834) 0:00:29.057 ************ 2025-05-23 00:58:27.911131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:58:27.911141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:58:27.911152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:58:27.911163 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911173 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:58:27.911184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:58:27.911194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:58:27.911205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:58:27.911216 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911226 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:58:27.911237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:58:27.911268 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911285 | orchestrator | 2025-05-23 00:58:27.911296 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-23 00:58:27.911307 | orchestrator | Friday 23 May 2025 00:56:45 +0000 (0:00:00.540) 0:00:29.597 ************ 2025-05-23 00:58:27.911318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-23 00:58:27.911328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-23 00:58:27.911339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-23 00:58:27.911350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-23 00:58:27.911360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-23 00:58:27.911371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-23 00:58:27.911390 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911401 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-23 00:58:27.911423 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-23 00:58:27.911433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-23 00:58:27.911444 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911455 | orchestrator | 2025-05-23 00:58:27.911466 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-23 00:58:27.911477 | orchestrator | Friday 23 May 2025 00:56:45 +0000 (0:00:00.371) 0:00:29.968 ************ 2025-05-23 00:58:27.911488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:58:27.911499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:58:27.911510 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:58:27.911521 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911532 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:58:27.911543 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:58:27.911553 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-23 00:58:27.911564 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:58:27.911575 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:58:27.911605 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:58:27.911616 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911627 | orchestrator | 2025-05-23 00:58:27.911638 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-23 00:58:27.911649 | orchestrator | Friday 23 May 2025 00:56:46 +0000 (0:00:00.406) 0:00:30.375 ************ 2025-05-23 00:58:27.911660 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 00:58:27.911670 | orchestrator | 2025-05-23 00:58:27.911688 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-23 00:58:27.911699 | orchestrator | Friday 23 May 2025 00:56:46 +0000 (0:00:00.662) 0:00:31.037 ************ 2025-05-23 00:58:27.911710 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911721 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911732 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911743 | orchestrator | 2025-05-23 00:58:27.911754 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-23 00:58:27.911764 | orchestrator | Friday 23 May 2025 00:56:47 +0000 (0:00:00.276) 0:00:31.314 ************ 2025-05-23 00:58:27.911775 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911786 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911797 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911808 | orchestrator | 2025-05-23 00:58:27.911819 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-23 00:58:27.911830 | orchestrator | Friday 23 May 2025 00:56:47 +0000 (0:00:00.277) 0:00:31.592 ************ 2025-05-23 00:58:27.911840 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.911851 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.911862 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.911872 | orchestrator | 2025-05-23 00:58:27.911883 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-23 00:58:27.911894 | orchestrator | Friday 23 May 2025 00:56:47 +0000 (0:00:00.345) 0:00:31.937 ************ 2025-05-23 00:58:27.911911 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.911922 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.911933 | orchestrator | ok: 
[testbed-node-5] 2025-05-23 00:58:27.911943 | orchestrator | 2025-05-23 00:58:27.911954 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-23 00:58:27.911965 | orchestrator | Friday 23 May 2025 00:56:48 +0000 (0:00:00.541) 0:00:32.479 ************ 2025-05-23 00:58:27.911976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:58:27.911987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:58:27.911998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:58:27.912009 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912019 | orchestrator | 2025-05-23 00:58:27.912030 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-23 00:58:27.912041 | orchestrator | Friday 23 May 2025 00:56:48 +0000 (0:00:00.395) 0:00:32.874 ************ 2025-05-23 00:58:27.912058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:58:27.912077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:58:27.912095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:58:27.912112 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912130 | orchestrator | 2025-05-23 00:58:27.912147 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-23 00:58:27.912165 | orchestrator | Friday 23 May 2025 00:56:49 +0000 (0:00:00.380) 0:00:33.255 ************ 2025-05-23 00:58:27.912184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:58:27.912203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:58:27.912223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:58:27.912241 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912283 | orchestrator | 2025-05-23 00:58:27.912303 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:58:27.912315 | orchestrator | Friday 23 May 2025 00:56:49 +0000 (0:00:00.429) 0:00:33.684 ************ 2025-05-23 00:58:27.912326 | orchestrator | ok: [testbed-node-3] 2025-05-23 00:58:27.912337 | orchestrator | ok: [testbed-node-4] 2025-05-23 00:58:27.912348 | orchestrator | ok: [testbed-node-5] 2025-05-23 00:58:27.912358 | orchestrator | 2025-05-23 00:58:27.912369 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-23 00:58:27.912380 | orchestrator | Friday 23 May 2025 00:56:49 +0000 (0:00:00.349) 0:00:34.034 ************ 2025-05-23 00:58:27.912390 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-23 00:58:27.912401 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-23 00:58:27.912412 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-23 00:58:27.912423 | orchestrator | 2025-05-23 00:58:27.912433 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-23 00:58:27.912445 | orchestrator | Friday 23 May 2025 00:56:50 +0000 (0:00:00.947) 0:00:34.982 ************ 2025-05-23 00:58:27.912456 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912467 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.912478 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.912489 | orchestrator | 2025-05-23 00:58:27.912500 | orchestrator | TASK 
[ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-23 00:58:27.912536 | orchestrator | Friday 23 May 2025 00:56:51 +0000 (0:00:00.269) 0:00:35.251 ************ 2025-05-23 00:58:27.912547 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912558 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.912570 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.912589 | orchestrator | 2025-05-23 00:58:27.912625 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-23 00:58:27.912655 | orchestrator | Friday 23 May 2025 00:56:51 +0000 (0:00:00.306) 0:00:35.558 ************ 2025-05-23 00:58:27.912697 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-23 00:58:27.912709 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-23 00:58:27.912720 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912731 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.912741 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-23 00:58:27.912752 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.912763 | orchestrator | 2025-05-23 00:58:27.912774 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-23 00:58:27.912787 | orchestrator | Friday 23 May 2025 00:56:51 +0000 (0:00:00.413) 0:00:35.971 ************ 2025-05-23 00:58:27.912809 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-23 00:58:27.912838 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.912850 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-23 00:58:27.912861 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.912888 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-23 00:58:27.912899 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.912910 | orchestrator | 2025-05-23 00:58:27.912921 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-23 00:58:27.912932 | orchestrator | Friday 23 May 2025 00:56:52 +0000 (0:00:00.447) 0:00:36.418 ************ 2025-05-23 00:58:27.912943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-23 00:58:27.912965 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-23 00:58:27.912977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-23 00:58:27.912987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-23 00:58:27.912998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-23 00:58:27.913009 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.913019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-23 00:58:27.913030 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.913041 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-23 00:58:27.913051 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-23 00:58:27.913062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-23 00:58:27.913073 | orchestrator | skipping: [testbed-node-5] 
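For reference, the container_exec_cmd and ceph_run_cmd facts gathered in this block boil down to prefixing every ceph CLI call with a docker exec against one of the running mon containers located earlier (the `docker ps -q --filter name=ceph-mon-<host>` calls logged above). The following is a minimal shell sketch of that pattern and of what the "create openstack pool(s)" items further below roughly correspond to; it is an illustration only, the exact command strings are assembled inside ceph-ansible, and the hostname and pool parameters are copied from the logged output:

    # Resolve a running mon container on the first mon host (mirrors the
    # "find a running mon container" task above); hostname taken from the log.
    MON_HOST=testbed-node-0
    MON_ID=$(docker ps -q --filter "name=ceph-mon-${MON_HOST}")

    # ceph_run_cmd / ceph_admin_command then amount to running the ceph CLI
    # inside that container, e.g. the fsid and status probes seen earlier:
    docker exec "ceph-mon-${MON_HOST}" ceph --cluster ceph fsid
    docker exec "ceph-mon-${MON_HOST}" ceph --cluster ceph status --format json

    # The "create openstack pool(s)" task below is roughly equivalent to creating
    # each replicated RBD pool with the logged parameters (pg_num 32, size 3):
    for pool in backups volumes images metrics vms; do
        docker exec "ceph-mon-${MON_HOST}" ceph --cluster ceph osd pool create "$pool" 32 32 replicated replicated_rule
        docker exec "ceph-mon-${MON_HOST}" ceph --cluster ceph osd pool application enable "$pool" rbd
        docker exec "ceph-mon-${MON_HOST}" ceph --cluster ceph osd pool set "$pool" size 3
    done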
2025-05-23 00:58:27.913083 | orchestrator | 2025-05-23 00:58:27.913094 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-23 00:58:27.913105 | orchestrator | Friday 23 May 2025 00:56:52 +0000 (0:00:00.619) 0:00:37.038 ************ 2025-05-23 00:58:27.913116 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.913126 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.913137 | orchestrator | skipping: [testbed-node-5] 2025-05-23 00:58:27.913148 | orchestrator | 2025-05-23 00:58:27.913159 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-23 00:58:27.913169 | orchestrator | Friday 23 May 2025 00:56:53 +0000 (0:00:00.274) 0:00:37.312 ************ 2025-05-23 00:58:27.913180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-23 00:58:27.913191 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:58:27.913202 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:58:27.913213 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-23 00:58:27.913224 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:58:27.913234 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:58:27.913245 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:58:27.913330 | orchestrator | 2025-05-23 00:58:27.913350 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-23 00:58:27.913370 | orchestrator | Friday 23 May 2025 00:56:54 +0000 (0:00:00.961) 0:00:38.274 ************ 2025-05-23 00:58:27.913384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-23 00:58:27.913394 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:58:27.913405 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:58:27.913416 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-23 00:58:27.913425 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:58:27.913435 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:58:27.913445 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:58:27.913454 | orchestrator | 2025-05-23 00:58:27.913464 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-23 00:58:27.913473 | orchestrator | Friday 23 May 2025 00:56:55 +0000 (0:00:01.665) 0:00:39.939 ************ 2025-05-23 00:58:27.913483 | orchestrator | skipping: [testbed-node-3] 2025-05-23 00:58:27.913492 | orchestrator | skipping: [testbed-node-4] 2025-05-23 00:58:27.913502 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-23 00:58:27.913511 | orchestrator | 2025-05-23 00:58:27.913521 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-23 00:58:27.913537 | orchestrator | Friday 23 May 2025 00:56:56 +0000 (0:00:00.458) 0:00:40.398 
************ 2025-05-23 00:58:27.913549 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:58:27.913567 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:58:27.913577 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:58:27.913591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:58:27.913605 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-23 00:58:27.913615 | orchestrator | 2025-05-23 00:58:27.913625 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-23 00:58:27.913634 | orchestrator | Friday 23 May 2025 00:57:39 +0000 (0:00:42.871) 0:01:23.270 ************ 2025-05-23 00:58:27.913644 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913654 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913663 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913689 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913708 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-23 00:58:27.913718 | orchestrator | 2025-05-23 00:58:27.913727 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-23 00:58:27.913737 | orchestrator | Friday 23 May 2025 00:57:59 +0000 (0:00:19.957) 0:01:43.227 ************ 2025-05-23 00:58:27.913746 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913756 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913770 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913802 | orchestrator | ok: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913812 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913822 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-23 00:58:27.913831 | orchestrator | 2025-05-23 00:58:27.913841 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-23 00:58:27.913851 | orchestrator | Friday 23 May 2025 00:58:08 +0000 (0:00:09.615) 0:01:52.843 ************ 2025-05-23 00:58:27.913860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913870 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.913879 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.913889 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913898 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.913908 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.913917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913927 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.913936 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.913946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913956 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.913971 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.913982 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.913991 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.914001 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.914010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-23 00:58:27.914050 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-23 00:58:27.914060 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-23 00:58:27.914074 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-23 00:58:27.914084 | orchestrator | 2025-05-23 00:58:27.914094 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:58:27.914103 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-23 00:58:27.914121 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-23 00:58:27.914131 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-23 00:58:27.914141 | orchestrator | 2025-05-23 00:58:27.914150 | orchestrator | 2025-05-23 00:58:27.914160 | orchestrator | 2025-05-23 00:58:27.914170 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-23 00:58:27.914179 | orchestrator | Friday 23 May 2025 00:58:26 +0000 (0:00:17.845) 0:02:10.688 ************ 2025-05-23 00:58:27.914189 | orchestrator | =============================================================================== 2025-05-23 00:58:27.914198 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.87s 2025-05-23 00:58:27.914208 | orchestrator | generate keys ---------------------------------------------------------- 19.96s 2025-05-23 00:58:27.914217 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.85s 2025-05-23 00:58:27.914227 | orchestrator | get keys from monitors -------------------------------------------------- 9.62s 2025-05-23 00:58:27.914237 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.46s 2025-05-23 00:58:27.914246 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.96s 2025-05-23 00:58:27.914278 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.83s 2025-05-23 00:58:27.914288 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.67s 2025-05-23 00:58:27.914297 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.42s 2025-05-23 00:58:27.914307 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.38s 2025-05-23 00:58:27.914316 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.96s 2025-05-23 00:58:27.914326 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.95s 2025-05-23 00:58:27.914335 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.94s 2025-05-23 00:58:27.914345 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.92s 2025-05-23 00:58:27.914355 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.89s 2025-05-23 00:58:27.914364 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.80s 2025-05-23 00:58:27.914374 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2025-05-23 00:58:27.914383 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.71s 2025-05-23 00:58:27.914393 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.67s 2025-05-23 00:58:27.914402 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.66s 2025-05-23 00:58:27.914412 | orchestrator | 2025-05-23 00:58:27 | INFO  | Task ba588c27-023a-4643-93d0-6b669301227f is in state SUCCESS 2025-05-23 00:58:27.914434 | orchestrator | 2025-05-23 00:58:27 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:27.914444 | orchestrator | 2025-05-23 00:58:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:30.966992 | orchestrator | 2025-05-23 00:58:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:30.968937 | orchestrator | 2025-05-23 00:58:30 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:30.970306 | orchestrator | 2025-05-23 00:58:30 | INFO  | Task 
0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:30.970345 | orchestrator | 2025-05-23 00:58:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:34.018990 | orchestrator | 2025-05-23 00:58:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:34.020076 | orchestrator | 2025-05-23 00:58:34 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:34.022533 | orchestrator | 2025-05-23 00:58:34 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:34.023639 | orchestrator | 2025-05-23 00:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:37.090792 | orchestrator | 2025-05-23 00:58:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:37.092738 | orchestrator | 2025-05-23 00:58:37 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:37.094981 | orchestrator | 2025-05-23 00:58:37 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:37.098367 | orchestrator | 2025-05-23 00:58:37 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:37.098395 | orchestrator | 2025-05-23 00:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:40.170577 | orchestrator | 2025-05-23 00:58:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:40.175894 | orchestrator | 2025-05-23 00:58:40 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:40.178393 | orchestrator | 2025-05-23 00:58:40 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:40.179980 | orchestrator | 2025-05-23 00:58:40 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:40.180010 | orchestrator | 2025-05-23 00:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:43.257196 | orchestrator | 2025-05-23 00:58:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:43.258331 | orchestrator | 2025-05-23 00:58:43 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:43.259372 | orchestrator | 2025-05-23 00:58:43 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:43.260029 | orchestrator | 2025-05-23 00:58:43 | INFO  | Task 0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state STARTED 2025-05-23 00:58:43.260051 | orchestrator | 2025-05-23 00:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:46.310712 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:46.310991 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:46.311702 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:46.312319 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:58:46.313187 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:58:46.319636 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:58:46.320927 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 
0cd617fa-b1f4-4bd2-8627-6ab0fa7b48bb is in state SUCCESS 2025-05-23 00:58:46.322618 | orchestrator | 2025-05-23 00:58:46.322658 | orchestrator | 2025-05-23 00:58:46.322670 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 00:58:46.322683 | orchestrator | 2025-05-23 00:58:46.322694 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 00:58:46.322707 | orchestrator | Friday 23 May 2025 00:56:15 +0000 (0:00:00.360) 0:00:00.360 ************ 2025-05-23 00:58:46.322742 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.322755 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:46.322766 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.322777 | orchestrator | 2025-05-23 00:58:46.322787 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 00:58:46.322800 | orchestrator | Friday 23 May 2025 00:56:16 +0000 (0:00:00.415) 0:00:00.775 ************ 2025-05-23 00:58:46.322811 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-23 00:58:46.322822 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-23 00:58:46.322833 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-23 00:58:46.322843 | orchestrator | 2025-05-23 00:58:46.322854 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-23 00:58:46.322864 | orchestrator | 2025-05-23 00:58:46.322875 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.322886 | orchestrator | Friday 23 May 2025 00:56:16 +0000 (0:00:00.313) 0:00:01.089 ************ 2025-05-23 00:58:46.322896 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:46.322908 | orchestrator | 2025-05-23 00:58:46.322919 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-23 00:58:46.322929 | orchestrator | Friday 23 May 2025 00:56:17 +0000 (0:00:00.811) 0:00:01.901 ************ 2025-05-23 00:58:46.322960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.323471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.323538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.323567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.323652 | orchestrator | 2025-05-23 00:58:46.323664 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-23 00:58:46.323681 | orchestrator | Friday 23 May 2025 00:56:19 +0000 (0:00:02.551) 0:00:04.453 ************ 2025-05-23 00:58:46.323692 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-23 00:58:46.323703 | orchestrator | 2025-05-23 00:58:46.323714 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-23 00:58:46.323725 | orchestrator | Friday 23 May 2025 00:56:20 +0000 (0:00:00.563) 0:00:05.016 ************ 2025-05-23 00:58:46.323736 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.323747 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:46.323757 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.323768 | orchestrator | 2025-05-23 00:58:46.323779 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-23 00:58:46.323790 | orchestrator | Friday 23 May 2025 00:56:20 +0000 (0:00:00.450) 0:00:05.466 ************ 2025-05-23 00:58:46.323801 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2025-05-23 00:58:46.323812 | orchestrator | 2025-05-23 00:58:46.323824 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.323845 | orchestrator | Friday 23 May 2025 00:56:21 +0000 (0:00:00.395) 0:00:05.861 ************ 2025-05-23 00:58:46.323863 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:46.323882 | orchestrator | 2025-05-23 00:58:46.323901 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-23 00:58:46.323920 | orchestrator | Friday 23 May 2025 00:56:22 +0000 (0:00:00.672) 0:00:06.534 ************ 2025-05-23 00:58:46.323949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.323963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324102 | orchestrator | 2025-05-23 00:58:46.324114 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-23 00:58:46.324127 | orchestrator | Friday 23 May 2025 00:56:25 +0000 (0:00:03.436) 0:00:09.971 ************ 2025-05-23 00:58:46.324148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:58:46.324162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324192 | orchestrator | skipping: [testbed-node-0] 
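These skips (and the matching ones for testbed-node-1 and testbed-node-2 that follow) are expected: every keystone service in the dumped configuration carries 'tls_backend': 'no', so there is no backend certificate for kolla-ansible to copy. A minimal sketch of the globals.yml switch that would change this, assuming kolla-ansible's usual kolla_enable_tls_backend toggle (the variable name is an assumption from kolla-ansible defaults, not read from this run):

    # /etc/kolla/globals.yml (sketch) - enable TLS between HAProxy and the service backends
    kolla_enable_tls_backend: "yes"   # assumption: standard kolla-ansible switch; this run effectively leaves it off

With backend TLS left off, the copy tasks skip and HAProxy talks plain HTTP to keystone on port 5000, matching the healthcheck_curl http://192.168.16.1x:5000 checks in the service definitions above.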
2025-05-23 00:58:46.324207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:58:46.324251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324289 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.324303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 
00:58:46.324321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324353 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.324366 | orchestrator | 2025-05-23 00:58:46.324379 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-23 00:58:46.324391 | orchestrator | Friday 23 May 2025 00:56:26 +0000 (0:00:00.767) 0:00:10.738 ************ 2025-05-23 00:58:46.324404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:58:46.324425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324450 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.324467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:58:46.324512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324535 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.324556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-23 00:58:46.324569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-23 00:58:46.324592 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.324610 | orchestrator | 2025-05-23 00:58:46.324621 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-23 00:58:46.324633 | orchestrator | Friday 23 May 2025 00:56:27 +0000 (0:00:01.151) 0:00:11.890 ************ 2025-05-23 00:58:46.324649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324780 | orchestrator | 2025-05-23 00:58:46.324791 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-23 00:58:46.324801 | orchestrator | Friday 23 May 2025 00:56:31 +0000 (0:00:03.577) 0:00:15.467 ************ 2025-05-23 00:58:46.324813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.324860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}}}}) 2025-05-23 00:58:46.324890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.324912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.324946 | orchestrator | 2025-05-23 00:58:46.324957 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-23 00:58:46.324968 | orchestrator | Friday 23 May 2025 00:56:37 +0000 (0:00:06.795) 0:00:22.263 ************ 2025-05-23 00:58:46.324979 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:58:46.324990 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:58:46.325000 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.325011 | orchestrator | 2025-05-23 00:58:46.325022 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-23 00:58:46.325032 | orchestrator | Friday 23 May 2025 00:56:40 +0000 (0:00:02.261) 0:00:24.524 ************ 2025-05-23 00:58:46.325043 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.325054 | orchestrator 
| skipping: [testbed-node-1] 2025-05-23 00:58:46.325065 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.325075 | orchestrator | 2025-05-23 00:58:46.325091 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-23 00:58:46.325103 | orchestrator | Friday 23 May 2025 00:56:41 +0000 (0:00:01.292) 0:00:25.816 ************ 2025-05-23 00:58:46.325113 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.325124 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.325135 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.325145 | orchestrator | 2025-05-23 00:58:46.325156 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-23 00:58:46.325167 | orchestrator | Friday 23 May 2025 00:56:42 +0000 (0:00:00.931) 0:00:26.748 ************ 2025-05-23 00:58:46.325178 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.325195 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.325205 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.325216 | orchestrator | 2025-05-23 00:58:46.325226 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-23 00:58:46.325305 | orchestrator | Friday 23 May 2025 00:56:42 +0000 (0:00:00.483) 0:00:27.232 ************ 2025-05-23 00:58:46.325318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.325335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.325348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.325360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.325380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.325400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-23 00:58:46.325416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.325427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.325439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.325450 | orchestrator | 2025-05-23 00:58:46.325461 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.325472 | orchestrator | Friday 23 May 2025 00:56:45 +0000 (0:00:02.687) 0:00:29.920 ************ 2025-05-23 00:58:46.325482 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.325493 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.325504 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.325514 | orchestrator | 2025-05-23 00:58:46.325525 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-23 00:58:46.325536 | orchestrator | Friday 23 May 2025 00:56:45 +0000 (0:00:00.300) 0:00:30.220 ************ 2025-05-23 00:58:46.325553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-23 00:58:46.325564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-23 00:58:46.325580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-23 00:58:46.325592 | orchestrator | 2025-05-23 00:58:46.325602 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-23 00:58:46.325613 | orchestrator | Friday 23 May 2025 00:56:47 +0000 (0:00:01.985) 0:00:32.205 ************ 2025-05-23 00:58:46.325624 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 00:58:46.325635 | orchestrator | 2025-05-23 00:58:46.325645 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-23 00:58:46.325656 | orchestrator | Friday 23 May 2025 00:56:48 +0000 (0:00:00.609) 0:00:32.815 ************ 2025-05-23 00:58:46.325666 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.325677 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.325688 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.325698 | 
orchestrator | 2025-05-23 00:58:46.325709 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-23 00:58:46.325720 | orchestrator | Friday 23 May 2025 00:56:49 +0000 (0:00:01.303) 0:00:34.119 ************ 2025-05-23 00:58:46.325730 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 00:58:46.325741 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-23 00:58:46.325752 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-23 00:58:46.325763 | orchestrator | 2025-05-23 00:58:46.325773 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-23 00:58:46.325784 | orchestrator | Friday 23 May 2025 00:56:50 +0000 (0:00:01.330) 0:00:35.449 ************ 2025-05-23 00:58:46.325795 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.325805 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:46.325814 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.325824 | orchestrator | 2025-05-23 00:58:46.325833 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-23 00:58:46.325843 | orchestrator | Friday 23 May 2025 00:56:51 +0000 (0:00:00.475) 0:00:35.925 ************ 2025-05-23 00:58:46.325853 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-23 00:58:46.325862 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-23 00:58:46.325871 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-23 00:58:46.325881 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-23 00:58:46.325891 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-23 00:58:46.325900 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-23 00:58:46.325914 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-23 00:58:46.325924 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-23 00:58:46.325934 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-23 00:58:46.325943 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-23 00:58:46.325953 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-23 00:58:46.325962 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-23 00:58:46.325971 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-23 00:58:46.325981 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-23 00:58:46.326003 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-23 00:58:46.326013 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-23 00:58:46.326072 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'}) 2025-05-23 00:58:46.326082 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-23 00:58:46.326092 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 00:58:46.326101 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 00:58:46.326111 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 00:58:46.326120 | orchestrator | 2025-05-23 00:58:46.326130 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-23 00:58:46.326139 | orchestrator | Friday 23 May 2025 00:57:02 +0000 (0:00:10.609) 0:00:46.535 ************ 2025-05-23 00:58:46.326148 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 00:58:46.326158 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 00:58:46.326167 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 00:58:46.326177 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 00:58:46.326186 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 00:58:46.326202 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 00:58:46.326212 | orchestrator | 2025-05-23 00:58:46.326222 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-23 00:58:46.326258 | orchestrator | Friday 23 May 2025 00:57:05 +0000 (0:00:03.195) 0:00:49.730 ************ 2025-05-23 00:58:46.326270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.326286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.326305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-23 00:58:46.326315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326369 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-23 00:58:46.326410 | orchestrator | 2025-05-23 00:58:46.326420 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.326430 | orchestrator | Friday 23 May 2025 00:57:08 +0000 (0:00:02.853) 0:00:52.583 ************ 2025-05-23 00:58:46.326439 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.326449 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.326459 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.326468 | orchestrator | 2025-05-23 00:58:46.326488 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-23 00:58:46.326498 | orchestrator | Friday 23 May 2025 00:57:08 +0000 (0:00:00.286) 0:00:52.870 ************ 2025-05-23 00:58:46.326507 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326517 | orchestrator | 2025-05-23 00:58:46.326526 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-23 00:58:46.326536 | orchestrator | Friday 23 May 2025 00:57:10 +0000 (0:00:02.565) 0:00:55.435 ************ 2025-05-23 00:58:46.326545 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326555 | orchestrator | 2025-05-23 00:58:46.326564 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-23 00:58:46.326573 | orchestrator | Friday 23 May 2025 00:57:13 +0000 (0:00:02.230) 0:00:57.666 ************ 2025-05-23 00:58:46.326583 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.326592 | orchestrator | ok: [testbed-node-1] 2025-05-23 
00:58:46.326601 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.326611 | orchestrator | 2025-05-23 00:58:46.326620 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-23 00:58:46.326630 | orchestrator | Friday 23 May 2025 00:57:14 +0000 (0:00:00.958) 0:00:58.625 ************ 2025-05-23 00:58:46.326639 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.326653 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:46.326663 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.326673 | orchestrator | 2025-05-23 00:58:46.326683 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-23 00:58:46.326692 | orchestrator | Friday 23 May 2025 00:57:14 +0000 (0:00:00.316) 0:00:58.941 ************ 2025-05-23 00:58:46.326701 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.326711 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.326721 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.326730 | orchestrator | 2025-05-23 00:58:46.326740 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-23 00:58:46.326749 | orchestrator | Friday 23 May 2025 00:57:14 +0000 (0:00:00.503) 0:00:59.445 ************ 2025-05-23 00:58:46.326758 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326768 | orchestrator | 2025-05-23 00:58:46.326777 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-23 00:58:46.326787 | orchestrator | Friday 23 May 2025 00:57:28 +0000 (0:00:13.071) 0:01:12.517 ************ 2025-05-23 00:58:46.326803 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326812 | orchestrator | 2025-05-23 00:58:46.326822 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-23 00:58:46.326831 | orchestrator | Friday 23 May 2025 00:57:36 +0000 (0:00:08.660) 0:01:21.177 ************ 2025-05-23 00:58:46.326841 | orchestrator | 2025-05-23 00:58:46.326851 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-23 00:58:46.326860 | orchestrator | Friday 23 May 2025 00:57:36 +0000 (0:00:00.068) 0:01:21.246 ************ 2025-05-23 00:58:46.326869 | orchestrator | 2025-05-23 00:58:46.326879 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-23 00:58:46.326888 | orchestrator | Friday 23 May 2025 00:57:36 +0000 (0:00:00.052) 0:01:21.298 ************ 2025-05-23 00:58:46.326897 | orchestrator | 2025-05-23 00:58:46.326907 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-23 00:58:46.326916 | orchestrator | Friday 23 May 2025 00:57:36 +0000 (0:00:00.054) 0:01:21.353 ************ 2025-05-23 00:58:46.326926 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326935 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:58:46.326945 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:58:46.326954 | orchestrator | 2025-05-23 00:58:46.326964 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-23 00:58:46.326973 | orchestrator | Friday 23 May 2025 00:57:45 +0000 (0:00:08.809) 0:01:30.162 ************ 2025-05-23 00:58:46.326983 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.326992 | orchestrator | changed: [testbed-node-1] 2025-05-23 
00:58:46.327001 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:58:46.327011 | orchestrator | 2025-05-23 00:58:46.327020 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-23 00:58:46.327030 | orchestrator | Friday 23 May 2025 00:57:50 +0000 (0:00:04.850) 0:01:35.012 ************ 2025-05-23 00:58:46.327044 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.327053 | orchestrator | changed: [testbed-node-1] 2025-05-23 00:58:46.327063 | orchestrator | changed: [testbed-node-2] 2025-05-23 00:58:46.327072 | orchestrator | 2025-05-23 00:58:46.327082 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.327091 | orchestrator | Friday 23 May 2025 00:58:00 +0000 (0:00:10.057) 0:01:45.069 ************ 2025-05-23 00:58:46.327101 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 00:58:46.327110 | orchestrator | 2025-05-23 00:58:46.327120 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-23 00:58:46.327129 | orchestrator | Friday 23 May 2025 00:58:01 +0000 (0:00:00.893) 0:01:45.963 ************ 2025-05-23 00:58:46.327138 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.327148 | orchestrator | ok: [testbed-node-1] 2025-05-23 00:58:46.327157 | orchestrator | ok: [testbed-node-2] 2025-05-23 00:58:46.327166 | orchestrator | 2025-05-23 00:58:46.327176 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-23 00:58:46.327185 | orchestrator | Friday 23 May 2025 00:58:02 +0000 (0:00:01.016) 0:01:46.979 ************ 2025-05-23 00:58:46.327195 | orchestrator | changed: [testbed-node-0] 2025-05-23 00:58:46.327204 | orchestrator | 2025-05-23 00:58:46.327214 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-23 00:58:46.327223 | orchestrator | Friday 23 May 2025 00:58:04 +0000 (0:00:01.497) 0:01:48.477 ************ 2025-05-23 00:58:46.327258 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-23 00:58:46.327276 | orchestrator | 2025-05-23 00:58:46.327294 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-23 00:58:46.327311 | orchestrator | Friday 23 May 2025 00:58:12 +0000 (0:00:08.753) 0:01:57.230 ************ 2025-05-23 00:58:46.327322 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-23 00:58:46.327331 | orchestrator | 2025-05-23 00:58:46.327341 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-23 00:58:46.327357 | orchestrator | Friday 23 May 2025 00:58:32 +0000 (0:00:19.926) 0:02:17.157 ************ 2025-05-23 00:58:46.327366 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-23 00:58:46.327376 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-23 00:58:46.327385 | orchestrator | 2025-05-23 00:58:46.327394 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-23 00:58:46.327404 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:06.917) 0:02:24.074 ************ 2025-05-23 00:58:46.327413 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.327426 | orchestrator | 
2025-05-23 00:58:46.327446 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-23 00:58:46.327471 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:00.123) 0:02:24.198 ************ 2025-05-23 00:58:46.327485 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.327500 | orchestrator | 2025-05-23 00:58:46.327515 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-23 00:58:46.327539 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:00.113) 0:02:24.312 ************ 2025-05-23 00:58:46.327554 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.327569 | orchestrator | 2025-05-23 00:58:46.327585 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-23 00:58:46.327600 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:00.140) 0:02:24.453 ************ 2025-05-23 00:58:46.327617 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.327629 | orchestrator | 2025-05-23 00:58:46.327638 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-23 00:58:46.327648 | orchestrator | Friday 23 May 2025 00:58:40 +0000 (0:00:00.474) 0:02:24.927 ************ 2025-05-23 00:58:46.327657 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:58:46.327667 | orchestrator | 2025-05-23 00:58:46.327676 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-23 00:58:46.327686 | orchestrator | Friday 23 May 2025 00:58:43 +0000 (0:00:03.343) 0:02:28.270 ************ 2025-05-23 00:58:46.327695 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:58:46.327705 | orchestrator | skipping: [testbed-node-1] 2025-05-23 00:58:46.327714 | orchestrator | skipping: [testbed-node-2] 2025-05-23 00:58:46.327724 | orchestrator | 2025-05-23 00:58:46.327733 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:58:46.327743 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-23 00:58:46.327754 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-23 00:58:46.327764 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-23 00:58:46.327773 | orchestrator | 2025-05-23 00:58:46.327782 | orchestrator | 2025-05-23 00:58:46.327792 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:58:46.327801 | orchestrator | Friday 23 May 2025 00:58:44 +0000 (0:00:00.630) 0:02:28.901 ************ 2025-05-23 00:58:46.327811 | orchestrator | =============================================================================== 2025-05-23 00:58:46.327820 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.93s 2025-05-23 00:58:46.327829 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.07s 2025-05-23 00:58:46.327838 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.61s 2025-05-23 00:58:46.327848 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.06s 2025-05-23 00:58:46.327863 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.81s 2025-05-23 00:58:46.327881 | 
orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.75s 2025-05-23 00:58:46.327890 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.66s 2025-05-23 00:58:46.327900 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.92s 2025-05-23 00:58:46.327909 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.80s 2025-05-23 00:58:46.327918 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.85s 2025-05-23 00:58:46.327928 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.58s 2025-05-23 00:58:46.327937 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.44s 2025-05-23 00:58:46.327947 | orchestrator | keystone : Creating default user role ----------------------------------- 3.34s 2025-05-23 00:58:46.327956 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.20s 2025-05-23 00:58:46.327965 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.85s 2025-05-23 00:58:46.327975 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.69s 2025-05-23 00:58:46.327984 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.57s 2025-05-23 00:58:46.327993 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.55s 2025-05-23 00:58:46.328003 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.26s 2025-05-23 00:58:46.328012 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.23s 2025-05-23 00:58:46.328021 | orchestrator | 2025-05-23 00:58:46 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:58:46.328031 | orchestrator | 2025-05-23 00:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:49.360955 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:49.361192 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:49.361989 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:49.362624 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:58:49.363908 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:58:49.363932 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:58:49.364293 | orchestrator | 2025-05-23 00:58:49 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:58:49.364317 | orchestrator | 2025-05-23 00:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:52.412875 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:52.414199 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:52.415629 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 
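The repeated INFO lines above show the deploy wrapper polling each OSISM task ID roughly once per second until it leaves the STARTED state (a later entry reports one task reaching SUCCESS). A minimal Python sketch of that wait loop, under the assumption that task state can be queried by ID, is shown here; the helper get_task_state() is a hypothetical stand-in for whatever the real client uses, not the actual osism implementation.

    # Illustrative sketch of the polling pattern seen in the log above.
    # `get_task_state` is a hypothetical callable: task_id -> "STARTED" | "SUCCESS" | "FAILURE".
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll each task until it reports a terminal state, waiting `interval` second(s) between rounds."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)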
2025-05-23 00:58:52.416672 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:58:52.417954 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:58:52.419032 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:58:52.420142 | orchestrator | 2025-05-23 00:58:52 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:58:52.420255 | orchestrator | 2025-05-23 00:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:55.452527 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:55.454689 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:55.456092 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:55.457473 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:58:55.459033 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:58:55.460458 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:58:55.462734 | orchestrator | 2025-05-23 00:58:55 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:58:55.462765 | orchestrator | 2025-05-23 00:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:58:58.511570 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:58:58.511819 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:58:58.513593 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:58:58.514838 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:58:58.515576 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:58:58.516730 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:58:58.517538 | orchestrator | 2025-05-23 00:58:58 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:58:58.517812 | orchestrator | 2025-05-23 00:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:01.566529 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:01.567757 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:59:01.569627 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:59:01.571139 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:01.572576 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:01.573679 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task 
1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:01.574692 | orchestrator | 2025-05-23 00:59:01 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:01.574738 | orchestrator | 2025-05-23 00:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:04.626320 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:04.626981 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:59:04.628710 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state STARTED 2025-05-23 00:59:04.629465 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:04.630849 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:04.632676 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:04.635531 | orchestrator | 2025-05-23 00:59:04 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:04.635877 | orchestrator | 2025-05-23 00:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:07.700445 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:07.701469 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:59:07.703104 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task 42f4660f-478e-46d6-82e9-aa3bc64c4371 is in state SUCCESS 2025-05-23 00:59:07.703667 | orchestrator | 2025-05-23 00:59:07.704886 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 00:59:07.704921 | orchestrator | 2025-05-23 00:59:07.704933 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-23 00:59:07.704944 | orchestrator | 2025-05-23 00:59:07.704955 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-23 00:59:07.704966 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:00.473) 0:00:00.473 ************ 2025-05-23 00:59:07.704977 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-23 00:59:07.704988 | orchestrator | 2025-05-23 00:59:07.704999 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-23 00:59:07.705009 | orchestrator | Friday 23 May 2025 00:58:39 +0000 (0:00:00.215) 0:00:00.689 ************ 2025-05-23 00:59:07.705021 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.705032 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-23 00:59:07.705059 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-23 00:59:07.705070 | orchestrator | 2025-05-23 00:59:07.705081 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-23 00:59:07.705091 | orchestrator | Friday 23 May 2025 00:58:40 +0000 (0:00:00.891) 0:00:01.580 ************ 2025-05-23 00:59:07.705102 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-23 00:59:07.705113 | orchestrator 
| 2025-05-23 00:59:07.705123 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-23 00:59:07.705134 | orchestrator | Friday 23 May 2025 00:58:40 +0000 (0:00:00.230) 0:00:01.811 ************ 2025-05-23 00:59:07.705145 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705155 | orchestrator | 2025-05-23 00:59:07.705317 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-23 00:59:07.705328 | orchestrator | Friday 23 May 2025 00:58:41 +0000 (0:00:00.620) 0:00:02.431 ************ 2025-05-23 00:59:07.705339 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705350 | orchestrator | 2025-05-23 00:59:07.705361 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-23 00:59:07.705372 | orchestrator | Friday 23 May 2025 00:58:41 +0000 (0:00:00.132) 0:00:02.563 ************ 2025-05-23 00:59:07.705382 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705393 | orchestrator | 2025-05-23 00:59:07.705404 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-23 00:59:07.705415 | orchestrator | Friday 23 May 2025 00:58:41 +0000 (0:00:00.448) 0:00:03.012 ************ 2025-05-23 00:59:07.705425 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705436 | orchestrator | 2025-05-23 00:59:07.705447 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-23 00:59:07.705482 | orchestrator | Friday 23 May 2025 00:58:41 +0000 (0:00:00.137) 0:00:03.149 ************ 2025-05-23 00:59:07.705493 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705504 | orchestrator | 2025-05-23 00:59:07.705514 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-23 00:59:07.705525 | orchestrator | Friday 23 May 2025 00:58:42 +0000 (0:00:00.153) 0:00:03.302 ************ 2025-05-23 00:59:07.705536 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705546 | orchestrator | 2025-05-23 00:59:07.705562 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-23 00:59:07.705582 | orchestrator | Friday 23 May 2025 00:58:42 +0000 (0:00:00.150) 0:00:03.452 ************ 2025-05-23 00:59:07.705593 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.705606 | orchestrator | 2025-05-23 00:59:07.705624 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-23 00:59:07.705644 | orchestrator | Friday 23 May 2025 00:58:42 +0000 (0:00:00.150) 0:00:03.602 ************ 2025-05-23 00:59:07.705663 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705682 | orchestrator | 2025-05-23 00:59:07.705702 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-23 00:59:07.705721 | orchestrator | Friday 23 May 2025 00:58:42 +0000 (0:00:00.123) 0:00:03.726 ************ 2025-05-23 00:59:07.705739 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.705752 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:59:07.705763 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:59:07.705773 | orchestrator | 2025-05-23 00:59:07.705784 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] 
******************************** 2025-05-23 00:59:07.705795 | orchestrator | Friday 23 May 2025 00:58:43 +0000 (0:00:00.929) 0:00:04.656 ************ 2025-05-23 00:59:07.705805 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.705816 | orchestrator | 2025-05-23 00:59:07.705826 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-23 00:59:07.705837 | orchestrator | Friday 23 May 2025 00:58:43 +0000 (0:00:00.258) 0:00:04.914 ************ 2025-05-23 00:59:07.705847 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.705858 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:59:07.705869 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:59:07.705879 | orchestrator | 2025-05-23 00:59:07.705890 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-23 00:59:07.705900 | orchestrator | Friday 23 May 2025 00:58:45 +0000 (0:00:01.991) 0:00:06.906 ************ 2025-05-23 00:59:07.705911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:59:07.705922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:59:07.705934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:59:07.705947 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.705960 | orchestrator | 2025-05-23 00:59:07.705972 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-23 00:59:07.706000 | orchestrator | Friday 23 May 2025 00:58:46 +0000 (0:00:00.487) 0:00:07.394 ************ 2025-05-23 00:59:07.706064 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706084 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706105 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706129 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706142 | orchestrator | 2025-05-23 00:59:07.706155 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-23 00:59:07.706167 | orchestrator | Friday 23 May 2025 00:58:47 +0000 (0:00:00.920) 0:00:08.315 ************ 2025-05-23 00:59:07.706182 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706197 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706250 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-23 00:59:07.706264 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706277 | orchestrator | 2025-05-23 00:59:07.706288 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-23 00:59:07.706298 | orchestrator | Friday 23 May 2025 00:58:47 +0000 (0:00:00.187) 0:00:08.502 ************ 2025-05-23 00:59:07.706312 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a5f5aa308057', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-23 00:58:44.346970', 'end': '2025-05-23 00:58:44.391245', 'delta': '0:00:00.044275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a5f5aa308057'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-23 00:59:07.706326 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '180478cf69c8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-23 00:58:44.917251', 'end': '2025-05-23 00:58:44.960923', 'delta': '0:00:00.043672', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['180478cf69c8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-23 00:59:07.706349 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '269b56f838c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-23 00:58:45.481988', 'end': '2025-05-23 00:58:45.521061', 'delta': '0:00:00.039073', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['269b56f838c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-23 00:59:07.706368 | orchestrator | 2025-05-23 00:59:07.706379 | orchestrator | TASK [ceph-facts : set_fact 
_container_exec_cmd] ******************************* 2025-05-23 00:59:07.706475 | orchestrator | Friday 23 May 2025 00:58:47 +0000 (0:00:00.224) 0:00:08.726 ************ 2025-05-23 00:59:07.706494 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.706505 | orchestrator | 2025-05-23 00:59:07.706516 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-23 00:59:07.706527 | orchestrator | Friday 23 May 2025 00:58:47 +0000 (0:00:00.296) 0:00:09.023 ************ 2025-05-23 00:59:07.706537 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-23 00:59:07.706548 | orchestrator | 2025-05-23 00:59:07.706559 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-23 00:59:07.706570 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:02.611) 0:00:11.634 ************ 2025-05-23 00:59:07.706581 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706592 | orchestrator | 2025-05-23 00:59:07.706602 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-23 00:59:07.706613 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.126) 0:00:11.761 ************ 2025-05-23 00:59:07.706624 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706634 | orchestrator | 2025-05-23 00:59:07.706645 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:59:07.706656 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.212) 0:00:11.973 ************ 2025-05-23 00:59:07.706669 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706688 | orchestrator | 2025-05-23 00:59:07.706706 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-23 00:59:07.706726 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.120) 0:00:12.094 ************ 2025-05-23 00:59:07.706746 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.706765 | orchestrator | 2025-05-23 00:59:07.706783 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-23 00:59:07.706796 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.116) 0:00:12.210 ************ 2025-05-23 00:59:07.706807 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706818 | orchestrator | 2025-05-23 00:59:07.706828 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-23 00:59:07.706839 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.202) 0:00:12.413 ************ 2025-05-23 00:59:07.706849 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706860 | orchestrator | 2025-05-23 00:59:07.706870 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-23 00:59:07.706881 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.117) 0:00:12.530 ************ 2025-05-23 00:59:07.706891 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706902 | orchestrator | 2025-05-23 00:59:07.706912 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-23 00:59:07.706922 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.115) 0:00:12.646 ************ 2025-05-23 00:59:07.706933 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706943 | orchestrator | 2025-05-23 00:59:07.706954 | orchestrator 
| TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-23 00:59:07.706964 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.103) 0:00:12.750 ************ 2025-05-23 00:59:07.706975 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.706985 | orchestrator | 2025-05-23 00:59:07.706995 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-23 00:59:07.707006 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.258) 0:00:13.008 ************ 2025-05-23 00:59:07.707025 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707035 | orchestrator | 2025-05-23 00:59:07.707046 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-23 00:59:07.707057 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.114) 0:00:13.123 ************ 2025-05-23 00:59:07.707067 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707077 | orchestrator | 2025-05-23 00:59:07.707088 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-23 00:59:07.707098 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:00.113) 0:00:13.236 ************ 2025-05-23 00:59:07.707112 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707124 | orchestrator | 2025-05-23 00:59:07.707136 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-23 00:59:07.707149 | orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:00.120) 0:00:13.357 ************ 2025-05-23 00:59:07.707162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-23 00:59:07.707333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_80dad4c8-3190-408a-8751-9f09dded29fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:59:07.707351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-23-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-23 00:59:07.707365 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707377 | orchestrator | 2025-05-23 00:59:07.707390 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-23 00:59:07.707404 | orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:00.233) 0:00:13.590 ************ 2025-05-23 00:59:07.707416 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707428 | orchestrator | 2025-05-23 00:59:07.707441 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-23 00:59:07.707453 | orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:00.250) 0:00:13.840 ************ 2025-05-23 00:59:07.707464 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707474 | orchestrator | 2025-05-23 00:59:07.707485 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-23 00:59:07.707495 | 
orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:00.126) 0:00:13.967 ************ 2025-05-23 00:59:07.707514 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707525 | orchestrator | 2025-05-23 00:59:07.707535 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-23 00:59:07.707546 | orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:00.118) 0:00:14.085 ************ 2025-05-23 00:59:07.707556 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.707567 | orchestrator | 2025-05-23 00:59:07.707577 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-23 00:59:07.707588 | orchestrator | Friday 23 May 2025 00:58:53 +0000 (0:00:00.492) 0:00:14.578 ************ 2025-05-23 00:59:07.707598 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.707705 | orchestrator | 2025-05-23 00:59:07.707731 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:59:07.707752 | orchestrator | Friday 23 May 2025 00:58:53 +0000 (0:00:00.133) 0:00:14.712 ************ 2025-05-23 00:59:07.707773 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.707793 | orchestrator | 2025-05-23 00:59:07.707814 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:59:07.707832 | orchestrator | Friday 23 May 2025 00:58:53 +0000 (0:00:00.475) 0:00:15.188 ************ 2025-05-23 00:59:07.707844 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.707855 | orchestrator | 2025-05-23 00:59:07.707865 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-23 00:59:07.707876 | orchestrator | Friday 23 May 2025 00:58:54 +0000 (0:00:00.348) 0:00:15.536 ************ 2025-05-23 00:59:07.707886 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707897 | orchestrator | 2025-05-23 00:59:07.707907 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-23 00:59:07.707918 | orchestrator | Friday 23 May 2025 00:58:54 +0000 (0:00:00.212) 0:00:15.749 ************ 2025-05-23 00:59:07.707928 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.707939 | orchestrator | 2025-05-23 00:59:07.707949 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-23 00:59:07.707960 | orchestrator | Friday 23 May 2025 00:58:54 +0000 (0:00:00.153) 0:00:15.903 ************ 2025-05-23 00:59:07.707970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:59:07.707981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:59:07.707992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:59:07.708003 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.708013 | orchestrator | 2025-05-23 00:59:07.708023 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-23 00:59:07.708034 | orchestrator | Friday 23 May 2025 00:58:55 +0000 (0:00:00.432) 0:00:16.335 ************ 2025-05-23 00:59:07.708044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:59:07.708055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:59:07.708065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:59:07.708076 | orchestrator | 
skipping: [testbed-node-0] 2025-05-23 00:59:07.708086 | orchestrator | 2025-05-23 00:59:07.708105 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-23 00:59:07.708116 | orchestrator | Friday 23 May 2025 00:58:55 +0000 (0:00:00.428) 0:00:16.764 ************ 2025-05-23 00:59:07.708127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.708138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-23 00:59:07.708149 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-23 00:59:07.708160 | orchestrator | 2025-05-23 00:59:07.708170 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-23 00:59:07.708181 | orchestrator | Friday 23 May 2025 00:58:56 +0000 (0:00:01.030) 0:00:17.795 ************ 2025-05-23 00:59:07.708192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:59:07.708202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:59:07.708282 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:59:07.708295 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.708305 | orchestrator | 2025-05-23 00:59:07.708322 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-23 00:59:07.708333 | orchestrator | Friday 23 May 2025 00:58:56 +0000 (0:00:00.198) 0:00:17.993 ************ 2025-05-23 00:59:07.708343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-23 00:59:07.708354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-23 00:59:07.708365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-23 00:59:07.708375 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.708386 | orchestrator | 2025-05-23 00:59:07.708396 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-23 00:59:07.708407 | orchestrator | Friday 23 May 2025 00:58:56 +0000 (0:00:00.224) 0:00:18.218 ************ 2025-05-23 00:59:07.708417 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-23 00:59:07.708428 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-23 00:59:07.708440 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-23 00:59:07.708450 | orchestrator | 2025-05-23 00:59:07.708461 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-23 00:59:07.708471 | orchestrator | Friday 23 May 2025 00:58:57 +0000 (0:00:00.185) 0:00:18.404 ************ 2025-05-23 00:59:07.708482 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.708493 | orchestrator | 2025-05-23 00:59:07.708503 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-23 00:59:07.708514 | orchestrator | Friday 23 May 2025 00:58:57 +0000 (0:00:00.129) 0:00:18.533 ************ 2025-05-23 00:59:07.708524 | orchestrator | skipping: [testbed-node-0] 2025-05-23 00:59:07.708535 | orchestrator | 2025-05-23 00:59:07.708546 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-23 00:59:07.708556 | orchestrator | Friday 23 May 2025 00:58:57 +0000 (0:00:00.303) 0:00:18.836 
************ 2025-05-23 00:59:07.708567 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.708578 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:59:07.708588 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:59:07.708599 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-23 00:59:07.708609 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:59:07.708620 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:59:07.708630 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:59:07.708641 | orchestrator | 2025-05-23 00:59:07.708651 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-23 00:59:07.708662 | orchestrator | Friday 23 May 2025 00:58:58 +0000 (0:00:00.880) 0:00:19.717 ************ 2025-05-23 00:59:07.708673 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-23 00:59:07.708683 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-23 00:59:07.708694 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-23 00:59:07.708704 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-23 00:59:07.708715 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-23 00:59:07.708725 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-23 00:59:07.708736 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-23 00:59:07.708753 | orchestrator | 2025-05-23 00:59:07.708773 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-23 00:59:07.708790 | orchestrator | Friday 23 May 2025 00:59:00 +0000 (0:00:01.552) 0:00:21.269 ************ 2025-05-23 00:59:07.708809 | orchestrator | ok: [testbed-node-0] 2025-05-23 00:59:07.708829 | orchestrator | 2025-05-23 00:59:07.708917 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-23 00:59:07.708930 | orchestrator | Friday 23 May 2025 00:59:00 +0000 (0:00:00.465) 0:00:21.735 ************ 2025-05-23 00:59:07.708940 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 00:59:07.708950 | orchestrator | 2025-05-23 00:59:07.708959 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-23 00:59:07.708970 | orchestrator | Friday 23 May 2025 00:59:01 +0000 (0:00:00.659) 0:00:22.394 ************ 2025-05-23 00:59:07.708986 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-23 00:59:07.708995 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-23 00:59:07.709005 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-23 00:59:07.709014 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-23 00:59:07.709024 | orchestrator | changed: [testbed-node-0] 
=> (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-23 00:59:07.709033 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-23 00:59:07.709042 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-23 00:59:07.709052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-23 00:59:07.709061 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-23 00:59:07.709071 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-23 00:59:07.709080 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-23 00:59:07.709090 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-23 00:59:07.709099 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-23 00:59:07.709109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-23 00:59:07.709118 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-23 00:59:07.709127 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-23 00:59:07.709137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-23 00:59:07.709146 | orchestrator | 2025-05-23 00:59:07.709155 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 00:59:07.709165 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-23 00:59:07.709176 | orchestrator | 2025-05-23 00:59:07.709185 | orchestrator | 2025-05-23 00:59:07.709194 | orchestrator | 2025-05-23 00:59:07.709204 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 00:59:07.709238 | orchestrator | Friday 23 May 2025 00:59:07 +0000 (0:00:06.219) 0:00:28.614 ************ 2025-05-23 00:59:07.709248 | orchestrator | =============================================================================== 2025-05-23 00:59:07.709258 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.22s 2025-05-23 00:59:07.709267 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.61s 2025-05-23 00:59:07.709277 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.99s 2025-05-23 00:59:07.709301 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.55s 2025-05-23 00:59:07.709311 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.03s 2025-05-23 00:59:07.709320 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.93s 2025-05-23 00:59:07.709330 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.92s 2025-05-23 00:59:07.709339 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.89s 2025-05-23 00:59:07.709349 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.88s 2025-05-23 00:59:07.709358 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not 
exist --- 0.66s 2025-05-23 00:59:07.709367 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.62s 2025-05-23 00:59:07.709377 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.49s 2025-05-23 00:59:07.709386 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.49s 2025-05-23 00:59:07.709395 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.48s 2025-05-23 00:59:07.709405 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s 2025-05-23 00:59:07.709489 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s 2025-05-23 00:59:07.709508 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.43s 2025-05-23 00:59:07.709517 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.43s 2025-05-23 00:59:07.709527 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.35s 2025-05-23 00:59:07.709536 | orchestrator | ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli --- 0.30s 2025-05-23 00:59:07.709546 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:07.709557 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:07.709569 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:07.709581 | orchestrator | 2025-05-23 00:59:07 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:07.709600 | orchestrator | 2025-05-23 00:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:10.762897 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:10.763959 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state STARTED 2025-05-23 00:59:10.765782 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:10.767347 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:10.768348 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:10.771108 | orchestrator | 2025-05-23 00:59:10 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:10.771135 | orchestrator | 2025-05-23 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:13.840441 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:13.841518 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task c58bbfa3-78c1-4417-9f24-8ea71ef98d68 is in state SUCCESS 2025-05-23 00:59:13.845713 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:13.846808 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:13.848090 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 
00:59:13.849108 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:13.850180 | orchestrator | 2025-05-23 00:59:13 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:13.850270 | orchestrator | 2025-05-23 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:16.887760 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:16.889422 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:16.891271 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:16.894425 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:16.895843 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:16.900558 | orchestrator | 2025-05-23 00:59:16 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:16.900631 | orchestrator | 2025-05-23 00:59:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:19.947514 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:19.947618 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:19.949944 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:19.950806 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:19.951844 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:19.953076 | orchestrator | 2025-05-23 00:59:19 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:19.953170 | orchestrator | 2025-05-23 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:22.999104 | orchestrator | 2025-05-23 00:59:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:22.999763 | orchestrator | 2025-05-23 00:59:22 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:22.999796 | orchestrator | 2025-05-23 00:59:22 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:23.001077 | orchestrator | 2025-05-23 00:59:23 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:23.002690 | orchestrator | 2025-05-23 00:59:23 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:23.002782 | orchestrator | 2025-05-23 00:59:23 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:23.002797 | orchestrator | 2025-05-23 00:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:26.045693 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:26.045931 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:26.048902 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in 
state STARTED 2025-05-23 00:59:26.049027 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:26.050486 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:26.051513 | orchestrator | 2025-05-23 00:59:26 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:26.051542 | orchestrator | 2025-05-23 00:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:29.096563 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:29.097460 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:29.099320 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:29.101473 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:29.101847 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:29.102381 | orchestrator | 2025-05-23 00:59:29 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:29.102406 | orchestrator | 2025-05-23 00:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:32.145635 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:32.146152 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:32.146908 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:32.148292 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:32.149000 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:32.149890 | orchestrator | 2025-05-23 00:59:32 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:32.149926 | orchestrator | 2025-05-23 00:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:35.188223 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:35.188327 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:35.188769 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:35.190172 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:35.190647 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:35.192521 | orchestrator | 2025-05-23 00:59:35 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:35.192547 | orchestrator | 2025-05-23 00:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:38.222709 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:38.224372 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task 
7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:38.225064 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:38.226640 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:38.227582 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:38.228693 | orchestrator | 2025-05-23 00:59:38 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:38.229019 | orchestrator | 2025-05-23 00:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:41.255059 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:41.255518 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:41.256133 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:41.256893 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:41.257589 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:41.258245 | orchestrator | 2025-05-23 00:59:41 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:41.258272 | orchestrator | 2025-05-23 00:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:44.287733 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:44.288508 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:44.289495 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:44.290347 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:44.291134 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:44.291859 | orchestrator | 2025-05-23 00:59:44 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:44.292032 | orchestrator | 2025-05-23 00:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:47.332504 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:47.333729 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:47.336244 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:47.336882 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:47.337743 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:47.339344 | orchestrator | 2025-05-23 00:59:47 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:47.339370 | orchestrator | 2025-05-23 00:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:50.377617 | orchestrator | 2025-05-23 
00:59:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:50.378063 | orchestrator | 2025-05-23 00:59:50 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:50.378805 | orchestrator | 2025-05-23 00:59:50 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:50.380209 | orchestrator | 2025-05-23 00:59:50 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:50.380910 | orchestrator | 2025-05-23 00:59:50 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:50.382281 | orchestrator | 2025-05-23 00:59:50 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:50.382327 | orchestrator | 2025-05-23 00:59:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:53.414135 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:53.417430 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:53.417462 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:53.418316 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:53.419572 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:53.421340 | orchestrator | 2025-05-23 00:59:53 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:53.421363 | orchestrator | 2025-05-23 00:59:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:56.466696 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:56.467131 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:56.467218 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:56.468003 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:56.468475 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:56.469198 | orchestrator | 2025-05-23 00:59:56 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 00:59:56.469238 | orchestrator | 2025-05-23 00:59:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 00:59:59.503541 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 00:59:59.504344 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 00:59:59.507412 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 00:59:59.507440 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 00:59:59.507453 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 00:59:59.507465 | orchestrator | 2025-05-23 00:59:59 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 
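The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above and below come from the deployment driver polling the manager for the state of its long-running tasks roughly once per second until each one reports a terminal state. A minimal sketch of such a polling loop, assuming a hypothetical get_task_state() lookup (the actual OSISM implementation may differ):

import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task until it leaves the STARTED/PENDING states.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            # get_task_state() is a hypothetical placeholder for whatever the
            # manager exposes (e.g. a Celery AsyncResult state lookup).
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)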
2025-05-23 00:59:59.507476 | orchestrator | 2025-05-23 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:02.546631 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:02.546690 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state STARTED 2025-05-23 01:00:02.547433 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:02.548467 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:02.548694 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:02.549447 | orchestrator | 2025-05-23 01:00:02 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:02.549469 | orchestrator | 2025-05-23 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:05.582852 | orchestrator | 2025-05-23 01:00:05.582953 | orchestrator | 2025-05-23 01:00:05.582971 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-23 01:00:05.582984 | orchestrator | 2025-05-23 01:00:05.582995 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-23 01:00:05.583006 | orchestrator | Friday 23 May 2025 00:58:30 +0000 (0:00:00.148) 0:00:00.148 ************ 2025-05-23 01:00:05.583122 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-23 01:00:05.583134 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583145 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583181 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-23 01:00:05.583194 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583205 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-23 01:00:05.583217 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-23 01:00:05.583232 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-23 01:00:05.583249 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-23 01:00:05.583260 | orchestrator | 2025-05-23 01:00:05.583271 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-23 01:00:05.583282 | orchestrator | Friday 23 May 2025 00:58:33 +0000 (0:00:03.094) 0:00:03.242 ************ 2025-05-23 01:00:05.583293 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-23 01:00:05.583304 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583315 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583326 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-23 01:00:05.583336 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-23 01:00:05.583347 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.nova.keyring) 2025-05-23 01:00:05.583358 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-23 01:00:05.583369 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-23 01:00:05.583380 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-23 01:00:05.583391 | orchestrator | 2025-05-23 01:00:05.583402 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-23 01:00:05.583413 | orchestrator | Friday 23 May 2025 00:58:33 +0000 (0:00:00.258) 0:00:03.500 ************ 2025-05-23 01:00:05.583425 | orchestrator | ok: [testbed-manager] => { 2025-05-23 01:00:05.583450 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-05-23 01:00:05.583468 | orchestrator | } 2025-05-23 01:00:05.583480 | orchestrator | 2025-05-23 01:00:05.583491 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-23 01:00:05.583502 | orchestrator | Friday 23 May 2025 00:58:33 +0000 (0:00:00.174) 0:00:03.675 ************ 2025-05-23 01:00:05.583614 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:05.583629 | orchestrator | 2025-05-23 01:00:05.583640 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-23 01:00:05.583651 | orchestrator | Friday 23 May 2025 00:59:08 +0000 (0:00:34.479) 0:00:38.154 ************ 2025-05-23 01:00:05.583663 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-23 01:00:05.583674 | orchestrator | 2025-05-23 01:00:05.583685 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-23 01:00:05.583696 | orchestrator | Friday 23 May 2025 00:59:08 +0000 (0:00:00.447) 0:00:38.602 ************ 2025-05-23 01:00:05.583707 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-23 01:00:05.583719 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-23 01:00:05.583730 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-23 01:00:05.583741 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-23 01:00:05.583753 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-23 01:00:05.583783 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-23 01:00:05.583795 | 
orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-23 01:00:05.583806 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-23 01:00:05.583817 | orchestrator | 2025-05-23 01:00:05.583828 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-23 01:00:05.583839 | orchestrator | Friday 23 May 2025 00:59:11 +0000 (0:00:02.890) 0:00:41.492 ************ 2025-05-23 01:00:05.583850 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:00:05.583861 | orchestrator | 2025-05-23 01:00:05.583871 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:00:05.583883 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 01:00:05.583893 | orchestrator | 2025-05-23 01:00:05.583904 | orchestrator | Friday 23 May 2025 00:59:11 +0000 (0:00:00.036) 0:00:41.529 ************ 2025-05-23 01:00:05.583915 | orchestrator | =============================================================================== 2025-05-23 01:00:05.583926 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 34.48s 2025-05-23 01:00:05.583936 | orchestrator | Check ceph keys --------------------------------------------------------- 3.09s 2025-05-23 01:00:05.583947 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.89s 2025-05-23 01:00:05.583958 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.45s 2025-05-23 01:00:05.583969 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s 2025-05-23 01:00:05.584063 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-05-23 01:00:05.584084 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-05-23 01:00:05.584095 | orchestrator | 2025-05-23 01:00:05.584106 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:05.584117 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:05.584129 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task 7dad95c4-f6dc-4973-960f-7dff3ec40d5e is in state SUCCESS 2025-05-23 01:00:05.584140 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:05.584178 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:05.584199 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:05.584228 | orchestrator | 2025-05-23 01:00:05 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:05.584246 | orchestrator | 2025-05-23 01:00:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:08.630435 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:08.630507 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task 
def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:08.630520 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:08.630944 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:08.631574 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:08.631997 | orchestrator | 2025-05-23 01:00:08 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:08.632024 | orchestrator | 2025-05-23 01:00:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:11.665059 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:11.665551 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:11.665876 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:11.666644 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:11.667308 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:11.667758 | orchestrator | 2025-05-23 01:00:11 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:11.667782 | orchestrator | 2025-05-23 01:00:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:14.694485 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:14.694589 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:14.695267 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:14.695521 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:14.696026 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:14.696630 | orchestrator | 2025-05-23 01:00:14 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:14.697660 | orchestrator | 2025-05-23 01:00:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:17.726604 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:17.726819 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:17.727671 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:17.728363 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:17.729015 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:17.729546 | orchestrator | 2025-05-23 01:00:17 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:17.729644 | orchestrator | 2025-05-23 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:20.757862 | orchestrator | 2025-05-23 
01:00:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:20.758410 | orchestrator | 2025-05-23 01:00:20 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:20.760556 | orchestrator | 2025-05-23 01:00:20 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:20.761423 | orchestrator | 2025-05-23 01:00:20 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:20.762284 | orchestrator | 2025-05-23 01:00:20 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:20.763197 | orchestrator | 2025-05-23 01:00:20 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:20.764562 | orchestrator | 2025-05-23 01:00:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:23.791660 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:23.793092 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:23.794249 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:23.796036 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:23.796948 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:23.798236 | orchestrator | 2025-05-23 01:00:23 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:23.798264 | orchestrator | 2025-05-23 01:00:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:26.831393 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:26.834341 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:26.837591 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:26.838731 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:26.841396 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:26.843231 | orchestrator | 2025-05-23 01:00:26 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:26.843264 | orchestrator | 2025-05-23 01:00:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:29.885937 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:29.886997 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:29.887520 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:29.888090 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:29.888784 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:29.890659 | orchestrator | 2025-05-23 01:00:29 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 
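The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" records above and below come from the deploy wrapper polling the state of the OSISM tasks it has queued until each one reports SUCCESS. The following is a minimal sketch of that polling pattern, not the actual osism client code: the `get_task_state` helper and the `wait_for_tasks` function are hypothetical stand-ins, while the task IDs and the 1-second interval are taken from the log.

```python
import time

# Hypothetical stand-in: in the real deployment the state comes from the
# OSISM task backend; this sketch only fixes the signature.
def get_task_state(task_id: str) -> str:
    raise NotImplementedError

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll every queued task until it leaves the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            # Matches the "Wait 1 second(s) until the next check" records.
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Example with task IDs as they appear in this log:
# wait_for_tasks([
#     "eee81a36-e0fa-4360-a4d6-6ece23412765",
#     "def513a0-e5b1-44d6-a7d3-b3770860c8ce",
#     "30a07b35-9bed-4513-9934-c44c963a145d",
# ])
```

Error handling for terminal failure states is omitted here; in this run all observed tasks only move from STARTED to SUCCESS.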
2025-05-23 01:00:29.890684 | orchestrator | 2025-05-23 01:00:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:32.933506 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:32.933598 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:32.933744 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:32.934467 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:32.935225 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:32.935979 | orchestrator | 2025-05-23 01:00:32 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:32.936115 | orchestrator | 2025-05-23 01:00:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:35.975938 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:35.976267 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state STARTED 2025-05-23 01:00:35.978624 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:35.979197 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:35.979775 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:35.980323 | orchestrator | 2025-05-23 01:00:35 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:35.980355 | orchestrator | 2025-05-23 01:00:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:39.020331 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:39.020655 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task def513a0-e5b1-44d6-a7d3-b3770860c8ce is in state SUCCESS 2025-05-23 01:00:39.020685 | orchestrator | 2025-05-23 01:00:39.020699 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-23 01:00:39.020711 | orchestrator | 2025-05-23 01:00:39.020722 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-23 01:00:39.020734 | orchestrator | Friday 23 May 2025 00:59:14 +0000 (0:00:00.125) 0:00:00.125 ************ 2025-05-23 01:00:39.020745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-23 01:00:39.020784 | orchestrator | 2025-05-23 01:00:39.020795 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-23 01:00:39.020806 | orchestrator | Friday 23 May 2025 00:59:14 +0000 (0:00:00.155) 0:00:00.280 ************ 2025-05-23 01:00:39.020818 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-23 01:00:39.020829 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-23 01:00:39.020840 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-23 01:00:39.020852 | orchestrator | 2025-05-23 01:00:39.020862 | orchestrator | 
TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-23 01:00:39.020873 | orchestrator | Friday 23 May 2025 00:59:15 +0000 (0:00:01.000) 0:00:01.280 ************ 2025-05-23 01:00:39.020884 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-23 01:00:39.020895 | orchestrator | 2025-05-23 01:00:39.020905 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-23 01:00:39.020916 | orchestrator | Friday 23 May 2025 00:59:16 +0000 (0:00:01.108) 0:00:02.389 ************ 2025-05-23 01:00:39.020927 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:39.020938 | orchestrator | 2025-05-23 01:00:39.020949 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-23 01:00:39.020959 | orchestrator | Friday 23 May 2025 00:59:17 +0000 (0:00:00.764) 0:00:03.153 ************ 2025-05-23 01:00:39.020970 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:39.020980 | orchestrator | 2025-05-23 01:00:39.020991 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-23 01:00:39.021002 | orchestrator | Friday 23 May 2025 00:59:18 +0000 (0:00:00.808) 0:00:03.962 ************ 2025-05-23 01:00:39.021012 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-23 01:00:39.021023 | orchestrator | ok: [testbed-manager] 2025-05-23 01:00:39.021034 | orchestrator | 2025-05-23 01:00:39.021045 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-23 01:00:39.021056 | orchestrator | Friday 23 May 2025 00:59:54 +0000 (0:00:35.960) 0:00:39.923 ************ 2025-05-23 01:00:39.021066 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-23 01:00:39.021078 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-23 01:00:39.021088 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-23 01:00:39.021099 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-23 01:00:39.021110 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-23 01:00:39.021121 | orchestrator | 2025-05-23 01:00:39.021162 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-23 01:00:39.021173 | orchestrator | Friday 23 May 2025 00:59:57 +0000 (0:00:03.471) 0:00:43.394 ************ 2025-05-23 01:00:39.021184 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-23 01:00:39.021195 | orchestrator | 2025-05-23 01:00:39.021205 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-23 01:00:39.021216 | orchestrator | Friday 23 May 2025 00:59:58 +0000 (0:00:00.491) 0:00:43.886 ************ 2025-05-23 01:00:39.021227 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:00:39.021237 | orchestrator | 2025-05-23 01:00:39.021248 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-23 01:00:39.021259 | orchestrator | Friday 23 May 2025 00:59:58 +0000 (0:00:00.106) 0:00:43.993 ************ 2025-05-23 01:00:39.021270 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:00:39.021283 | orchestrator | 2025-05-23 01:00:39.021296 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-23 
01:00:39.021309 | orchestrator | Friday 23 May 2025 00:59:58 +0000 (0:00:00.273) 0:00:44.267 ************ 2025-05-23 01:00:39.021321 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:39.021342 | orchestrator | 2025-05-23 01:00:39.021461 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-23 01:00:39.021478 | orchestrator | Friday 23 May 2025 01:00:00 +0000 (0:00:01.435) 0:00:45.703 ************ 2025-05-23 01:00:39.021491 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:39.021505 | orchestrator | 2025-05-23 01:00:39.021518 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-23 01:00:39.021531 | orchestrator | Friday 23 May 2025 01:00:01 +0000 (0:00:00.786) 0:00:46.490 ************ 2025-05-23 01:00:39.021543 | orchestrator | changed: [testbed-manager] 2025-05-23 01:00:39.021555 | orchestrator | 2025-05-23 01:00:39.021568 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-23 01:00:39.021595 | orchestrator | Friday 23 May 2025 01:00:01 +0000 (0:00:00.548) 0:00:47.038 ************ 2025-05-23 01:00:39.021609 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-23 01:00:39.021622 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-23 01:00:39.021633 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-23 01:00:39.021644 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-23 01:00:39.021655 | orchestrator | 2025-05-23 01:00:39.021666 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:00:39.021692 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-23 01:00:39.021705 | orchestrator | 2025-05-23 01:00:39.021716 | orchestrator | Friday 23 May 2025 01:00:02 +0000 (0:00:01.165) 0:00:48.204 ************ 2025-05-23 01:00:39.021727 | orchestrator | =============================================================================== 2025-05-23 01:00:39.021738 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.96s 2025-05-23 01:00:39.021749 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.47s 2025-05-23 01:00:39.021760 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s 2025-05-23 01:00:39.021770 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.17s 2025-05-23 01:00:39.021781 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2025-05-23 01:00:39.021792 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.00s 2025-05-23 01:00:39.021803 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s 2025-05-23 01:00:39.021813 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2025-05-23 01:00:39.021824 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.76s 2025-05-23 01:00:39.021835 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s 2025-05-23 01:00:39.021846 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2025-05-23 01:00:39.021856 | orchestrator | osism.services.cephclient : Include rook task 
--------------------------- 0.27s 2025-05-23 01:00:39.021867 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.16s 2025-05-23 01:00:39.021878 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s 2025-05-23 01:00:39.021889 | orchestrator | 2025-05-23 01:00:39.021900 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:39.022498 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:39.023752 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:39.024774 | orchestrator | 2025-05-23 01:00:39 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:39.024795 | orchestrator | 2025-05-23 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:42.057844 | orchestrator | 2025-05-23 01:00:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:42.059193 | orchestrator | 2025-05-23 01:00:42 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:42.059834 | orchestrator | 2025-05-23 01:00:42 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:42.060398 | orchestrator | 2025-05-23 01:00:42 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:42.061140 | orchestrator | 2025-05-23 01:00:42 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:42.061165 | orchestrator | 2025-05-23 01:00:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:45.095572 | orchestrator | 2025-05-23 01:00:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:45.095746 | orchestrator | 2025-05-23 01:00:45 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:45.095999 | orchestrator | 2025-05-23 01:00:45 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:45.097079 | orchestrator | 2025-05-23 01:00:45 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:45.099791 | orchestrator | 2025-05-23 01:00:45 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:45.099824 | orchestrator | 2025-05-23 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:48.131402 | orchestrator | 2025-05-23 01:00:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:48.131848 | orchestrator | 2025-05-23 01:00:48 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:48.132442 | orchestrator | 2025-05-23 01:00:48 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:48.133029 | orchestrator | 2025-05-23 01:00:48 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:48.133573 | orchestrator | 2025-05-23 01:00:48 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:48.133672 | orchestrator | 2025-05-23 01:00:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:51.170265 | orchestrator | 2025-05-23 01:00:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:51.170368 | orchestrator | 2025-05-23 
01:00:51 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:51.172456 | orchestrator | 2025-05-23 01:00:51 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:51.172491 | orchestrator | 2025-05-23 01:00:51 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:51.172886 | orchestrator | 2025-05-23 01:00:51 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:51.172992 | orchestrator | 2025-05-23 01:00:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:54.213319 | orchestrator | 2025-05-23 01:00:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:54.214358 | orchestrator | 2025-05-23 01:00:54 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:54.215098 | orchestrator | 2025-05-23 01:00:54 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:54.215788 | orchestrator | 2025-05-23 01:00:54 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:54.216638 | orchestrator | 2025-05-23 01:00:54 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:54.218316 | orchestrator | 2025-05-23 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:00:57.255859 | orchestrator | 2025-05-23 01:00:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:00:57.257535 | orchestrator | 2025-05-23 01:00:57 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:00:57.258189 | orchestrator | 2025-05-23 01:00:57 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state STARTED 2025-05-23 01:00:57.258878 | orchestrator | 2025-05-23 01:00:57 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:00:57.259613 | orchestrator | 2025-05-23 01:00:57 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:00:57.259799 | orchestrator | 2025-05-23 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:00.301507 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:00.301956 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:00.304231 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:00.304987 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task 1c5e4ace-9f6f-460b-ad08-e3b8c6281ab1 is in state SUCCESS 2025-05-23 01:01:00.305019 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-23 01:01:00.305034 | orchestrator | 2025-05-23 01:01:00.305049 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-23 01:01:00.305061 | orchestrator | 2025-05-23 01:01:00.305074 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-23 01:01:00.305087 | orchestrator | Friday 23 May 2025 01:00:05 +0000 (0:00:00.363) 0:00:00.363 ************ 2025-05-23 01:01:00.305100 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305142 | orchestrator | 2025-05-23 01:01:00.305162 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 
2025-05-23 01:01:00.305182 | orchestrator | Friday 23 May 2025 01:00:07 +0000 (0:00:01.391) 0:00:01.755 ************ 2025-05-23 01:01:00.305201 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305212 | orchestrator | 2025-05-23 01:01:00.305223 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-23 01:01:00.305234 | orchestrator | Friday 23 May 2025 01:00:08 +0000 (0:00:00.983) 0:00:02.739 ************ 2025-05-23 01:01:00.305245 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305352 | orchestrator | 2025-05-23 01:01:00.305370 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-23 01:01:00.305382 | orchestrator | Friday 23 May 2025 01:00:09 +0000 (0:00:00.876) 0:00:03.615 ************ 2025-05-23 01:01:00.305393 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305403 | orchestrator | 2025-05-23 01:01:00.305414 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-23 01:01:00.305425 | orchestrator | Friday 23 May 2025 01:00:10 +0000 (0:00:00.968) 0:00:04.583 ************ 2025-05-23 01:01:00.305436 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305446 | orchestrator | 2025-05-23 01:01:00.305469 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-23 01:01:00.305480 | orchestrator | Friday 23 May 2025 01:00:11 +0000 (0:00:01.016) 0:00:05.600 ************ 2025-05-23 01:01:00.305491 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305502 | orchestrator | 2025-05-23 01:01:00.305513 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-23 01:01:00.305523 | orchestrator | Friday 23 May 2025 01:00:11 +0000 (0:00:00.895) 0:00:06.495 ************ 2025-05-23 01:01:00.305644 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305659 | orchestrator | 2025-05-23 01:01:00.305670 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-23 01:01:00.305680 | orchestrator | Friday 23 May 2025 01:00:13 +0000 (0:00:01.144) 0:00:07.639 ************ 2025-05-23 01:01:00.305691 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305702 | orchestrator | 2025-05-23 01:01:00.305713 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-23 01:01:00.305753 | orchestrator | Friday 23 May 2025 01:00:14 +0000 (0:00:01.130) 0:00:08.770 ************ 2025-05-23 01:01:00.305766 | orchestrator | changed: [testbed-manager] 2025-05-23 01:01:00.305777 | orchestrator | 2025-05-23 01:01:00.305788 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-23 01:01:00.305798 | orchestrator | Friday 23 May 2025 01:00:31 +0000 (0:00:17.753) 0:00:26.523 ************ 2025-05-23 01:01:00.305809 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:01:00.305820 | orchestrator | 2025-05-23 01:01:00.305908 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-23 01:01:00.305923 | orchestrator | 2025-05-23 01:01:00.305934 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-23 01:01:00.305945 | orchestrator | Friday 23 May 2025 01:00:32 +0000 (0:00:00.657) 0:00:27.181 ************ 2025-05-23 01:01:00.305956 | orchestrator | changed: 
[testbed-node-0] 2025-05-23 01:01:00.305967 | orchestrator | 2025-05-23 01:01:00.306091 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-23 01:01:00.306134 | orchestrator | 2025-05-23 01:01:00.306155 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-23 01:01:00.306173 | orchestrator | Friday 23 May 2025 01:00:34 +0000 (0:00:01.975) 0:00:29.156 ************ 2025-05-23 01:01:00.306190 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:00.306204 | orchestrator | 2025-05-23 01:01:00.306215 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-23 01:01:00.306226 | orchestrator | 2025-05-23 01:01:00.306237 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-23 01:01:00.306248 | orchestrator | Friday 23 May 2025 01:00:36 +0000 (0:00:01.773) 0:00:30.930 ************ 2025-05-23 01:01:00.306259 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:00.306269 | orchestrator | 2025-05-23 01:01:00.306281 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:01:00.306292 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-23 01:01:00.306305 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:01:00.306316 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:01:00.306328 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:01:00.306339 | orchestrator | 2025-05-23 01:01:00.306350 | orchestrator | 2025-05-23 01:01:00.306361 | orchestrator | 2025-05-23 01:01:00.306372 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:01:00.306383 | orchestrator | Friday 23 May 2025 01:00:37 +0000 (0:00:01.450) 0:00:32.380 ************ 2025-05-23 01:01:00.306394 | orchestrator | =============================================================================== 2025-05-23 01:01:00.306419 | orchestrator | Create admin user ------------------------------------------------------ 17.75s 2025-05-23 01:01:00.306431 | orchestrator | Restart ceph manager service -------------------------------------------- 5.20s 2025-05-23 01:01:00.306442 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.39s 2025-05-23 01:01:00.306453 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s 2025-05-23 01:01:00.306475 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.13s 2025-05-23 01:01:00.306487 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s 2025-05-23 01:01:00.306498 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s 2025-05-23 01:01:00.306508 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.97s 2025-05-23 01:01:00.306519 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s 2025-05-23 01:01:00.306530 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s 2025-05-23 01:01:00.306541 | orchestrator | Remove 
temporary file for ceph_dashboard_password ----------------------- 0.66s 2025-05-23 01:01:00.306552 | orchestrator | 2025-05-23 01:01:00.307698 | orchestrator | 2025-05-23 01:01:00.307734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:01:00.307745 | orchestrator | 2025-05-23 01:01:00.307756 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:01:00.307767 | orchestrator | Friday 23 May 2025 00:58:49 +0000 (0:00:00.244) 0:00:00.244 ************ 2025-05-23 01:01:00.307777 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:01:00.307789 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:01:00.307799 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:01:00.307810 | orchestrator | 2025-05-23 01:01:00.307821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:01:00.307837 | orchestrator | Friday 23 May 2025 00:58:49 +0000 (0:00:00.379) 0:00:00.624 ************ 2025-05-23 01:01:00.307849 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-23 01:01:00.307860 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-23 01:01:00.307944 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-23 01:01:00.307957 | orchestrator | 2025-05-23 01:01:00.307968 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-23 01:01:00.307979 | orchestrator | 2025-05-23 01:01:00.307989 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-23 01:01:00.308000 | orchestrator | Friday 23 May 2025 00:58:49 +0000 (0:00:00.313) 0:00:00.938 ************ 2025-05-23 01:01:00.308011 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:01:00.308022 | orchestrator | 2025-05-23 01:01:00.308033 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-23 01:01:00.308043 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.751) 0:00:01.689 ************ 2025-05-23 01:01:00.308054 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-23 01:01:00.308065 | orchestrator | 2025-05-23 01:01:00.308075 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-23 01:01:00.308086 | orchestrator | Friday 23 May 2025 00:58:53 +0000 (0:00:03.458) 0:00:05.148 ************ 2025-05-23 01:01:00.308096 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-23 01:01:00.308107 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-23 01:01:00.308178 | orchestrator | 2025-05-23 01:01:00.308190 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-23 01:01:00.308201 | orchestrator | Friday 23 May 2025 00:59:00 +0000 (0:00:06.663) 0:00:11.811 ************ 2025-05-23 01:01:00.308211 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:01:00.308222 | orchestrator | 2025-05-23 01:01:00.308233 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-23 01:01:00.308243 | orchestrator | Friday 23 May 2025 00:59:04 +0000 (0:00:03.526) 0:00:15.338 ************ 
2025-05-23 01:01:00.308254 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:01:00.308264 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-23 01:01:00.308290 | orchestrator | 2025-05-23 01:01:00.308301 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-23 01:01:00.308311 | orchestrator | Friday 23 May 2025 00:59:08 +0000 (0:00:04.034) 0:00:19.373 ************ 2025-05-23 01:01:00.308322 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:01:00.308333 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-23 01:01:00.308344 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-23 01:01:00.308355 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-23 01:01:00.308366 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-23 01:01:00.308376 | orchestrator | 2025-05-23 01:01:00.308387 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-23 01:01:00.308398 | orchestrator | Friday 23 May 2025 00:59:23 +0000 (0:00:15.807) 0:00:35.180 ************ 2025-05-23 01:01:00.308408 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-23 01:01:00.308419 | orchestrator | 2025-05-23 01:01:00.308430 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-23 01:01:00.308440 | orchestrator | Friday 23 May 2025 00:59:28 +0000 (0:00:04.363) 0:00:39.544 ************ 2025-05-23 01:01:00.308455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308649 | orchestrator | 2025-05-23 01:01:00.308660 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-23 01:01:00.308671 | orchestrator | Friday 23 May 2025 00:59:30 +0000 (0:00:02.603) 0:00:42.147 ************ 2025-05-23 01:01:00.308682 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-23 01:01:00.308692 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-23 01:01:00.308701 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-23 01:01:00.308711 | orchestrator | 2025-05-23 01:01:00.308720 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-23 01:01:00.308730 | orchestrator | Friday 23 May 2025 00:59:33 +0000 (0:00:02.669) 0:00:44.816 ************ 2025-05-23 01:01:00.308739 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.308749 | orchestrator | 2025-05-23 01:01:00.308759 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-23 01:01:00.308768 | orchestrator | Friday 23 May 2025 00:59:33 +0000 (0:00:00.209) 0:00:45.026 ************ 2025-05-23 01:01:00.308778 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.308787 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.308796 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.308806 | orchestrator | 2025-05-23 01:01:00.308815 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-23 01:01:00.308824 | orchestrator | Friday 23 May 2025 00:59:34 +0000 (0:00:00.409) 0:00:45.435 ************ 2025-05-23 01:01:00.308834 | orchestrator | included: 
/ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:01:00.308844 | orchestrator | 2025-05-23 01:01:00.308853 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-23 01:01:00.308862 | orchestrator | Friday 23 May 2025 00:59:34 +0000 (0:00:00.694) 0:00:46.130 ************ 2025-05-23 01:01:00.308873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.308921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.308997 | orchestrator | 2025-05-23 01:01:00.309007 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-23 01:01:00.309017 | orchestrator | Friday 23 May 2025 00:59:39 +0000 (0:00:04.141) 0:00:50.272 ************ 2025-05-23 01:01:00.309027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309064 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.309078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309134 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.309145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309191 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.309201 | orchestrator | 2025-05-23 01:01:00.309211 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-23 01:01:00.309221 | orchestrator | Friday 23 May 2025 00:59:39 +0000 (0:00:00.606) 0:00:50.879 ************ 2025-05-23 01:01:00.309231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309283 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.309297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309318 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.309328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309366 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.309376 | orchestrator | 2025-05-23 01:01:00.309386 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-23 01:01:00.309400 | orchestrator | Friday 23 May 2025 00:59:41 +0000 (0:00:02.172) 0:00:53.051 ************ 2025-05-23 01:01:00.309415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309524 | orchestrator | 2025-05-23 01:01:00.309534 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-23 01:01:00.309544 | orchestrator | Friday 23 May 2025 00:59:46 +0000 (0:00:04.367) 0:00:57.419 ************ 2025-05-23 01:01:00.309553 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.309563 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:00.309573 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:00.309582 | orchestrator | 2025-05-23 01:01:00.309592 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-23 01:01:00.309609 | orchestrator | Friday 23 May 2025 00:59:48 +0000 (0:00:02.497) 0:00:59.916 ************ 2025-05-23 01:01:00.309619 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:01:00.309628 | orchestrator | 2025-05-23 01:01:00.309638 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-23 01:01:00.309648 | orchestrator | Friday 23 May 2025 00:59:50 +0000 (0:00:02.031) 0:01:01.948 ************ 2025-05-23 01:01:00.309657 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.309667 | orchestrator | 
skipping: [testbed-node-0] 2025-05-23 01:01:00.309676 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.309686 | orchestrator | 2025-05-23 01:01:00.309695 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-23 01:01:00.309705 | orchestrator | Friday 23 May 2025 00:59:51 +0000 (0:00:00.822) 0:01:02.771 ************ 2025-05-23 01:01:00.309725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.309758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.309833 | orchestrator | 2025-05-23 01:01:00.309843 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-23 01:01:00.309859 | orchestrator | Friday 23 May 2025 01:00:04 +0000 (0:00:13.000) 0:01:15.772 ************ 2025-05-23 01:01:00.309869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309909 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.309919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309955 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.309970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-23 01:01:00.309985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.309995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:00.310005 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.310015 | orchestrator | 2025-05-23 01:01:00.310074 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-23 01:01:00.310084 | orchestrator | Friday 23 May 2025 01:00:06 +0000 (0:00:01.573) 0:01:17.345 ************ 2025-05-23 01:01:00.310094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.310128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.310152 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-23 01:01:00.310164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:00.310240 | orchestrator | 2025-05-23 01:01:00.310250 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-23 01:01:00.310260 | orchestrator | Friday 23 May 2025 01:00:09 +0000 (0:00:03.624) 0:01:20.969 ************ 2025-05-23 01:01:00.310270 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:00.310282 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:00.310301 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:00.310311 | orchestrator | 2025-05-23 01:01:00.310321 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-23 01:01:00.310331 | orchestrator | Friday 23 May 2025 01:00:10 +0000 (0:00:00.719) 0:01:21.689 ************ 2025-05-23 01:01:00.310342 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310359 | orchestrator | 2025-05-23 01:01:00.310375 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-23 01:01:00.310392 | orchestrator | Friday 23 May 2025 01:00:13 +0000 (0:00:03.024) 0:01:24.714 ************ 2025-05-23 01:01:00.310408 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310423 | orchestrator | 2025-05-23 01:01:00.310433 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-23 01:01:00.310442 | orchestrator | Friday 23 May 2025 01:00:15 +0000 (0:00:02.412) 0:01:27.126 ************ 2025-05-23 01:01:00.310451 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310467 | orchestrator | 2025-05-23 01:01:00.310477 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-23 01:01:00.310486 | orchestrator | Friday 23 May 2025 01:00:26 +0000 (0:00:10.846) 0:01:37.973 ************ 2025-05-23 01:01:00.310495 | orchestrator | 2025-05-23 01:01:00.310505 | orchestrator | 
TASK [barbican : Flush handlers] *********************************************** 2025-05-23 01:01:00.310514 | orchestrator | Friday 23 May 2025 01:00:26 +0000 (0:00:00.054) 0:01:38.027 ************ 2025-05-23 01:01:00.310524 | orchestrator | 2025-05-23 01:01:00.310533 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-23 01:01:00.310543 | orchestrator | Friday 23 May 2025 01:00:26 +0000 (0:00:00.175) 0:01:38.203 ************ 2025-05-23 01:01:00.310552 | orchestrator | 2025-05-23 01:01:00.310562 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-23 01:01:00.310571 | orchestrator | Friday 23 May 2025 01:00:27 +0000 (0:00:00.057) 0:01:38.261 ************ 2025-05-23 01:01:00.310580 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310590 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:00.310599 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:00.310609 | orchestrator | 2025-05-23 01:01:00.310618 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-23 01:01:00.310628 | orchestrator | Friday 23 May 2025 01:00:39 +0000 (0:00:12.405) 0:01:50.666 ************ 2025-05-23 01:01:00.310637 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310647 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:00.310656 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:00.310665 | orchestrator | 2025-05-23 01:01:00.310675 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-23 01:01:00.310685 | orchestrator | Friday 23 May 2025 01:00:49 +0000 (0:00:10.086) 0:02:00.753 ************ 2025-05-23 01:01:00.310694 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:00.310704 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:00.310713 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:00.310722 | orchestrator | 2025-05-23 01:01:00.310732 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:01:00.310742 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:01:00.310751 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 01:01:00.310761 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 01:01:00.310771 | orchestrator | 2025-05-23 01:01:00.310780 | orchestrator | 2025-05-23 01:01:00.310790 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:01:00.310799 | orchestrator | Friday 23 May 2025 01:00:58 +0000 (0:00:09.305) 0:02:10.059 ************ 2025-05-23 01:01:00.310808 | orchestrator | =============================================================================== 2025-05-23 01:01:00.310818 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.81s 2025-05-23 01:01:00.310827 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.00s 2025-05-23 01:01:00.310836 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.41s 2025-05-23 01:01:00.310846 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.85s 2025-05-23 01:01:00.310855 | orchestrator | barbican : 
Restart barbican-keystone-listener container ---------------- 10.09s 2025-05-23 01:01:00.310865 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.31s 2025-05-23 01:01:00.310874 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.66s 2025-05-23 01:01:00.310891 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.37s 2025-05-23 01:01:00.310916 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.36s 2025-05-23 01:01:00.310932 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.14s 2025-05-23 01:01:00.310948 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.03s 2025-05-23 01:01:00.310962 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.62s 2025-05-23 01:01:00.310972 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.53s 2025-05-23 01:01:00.310987 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.46s 2025-05-23 01:01:00.310996 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.02s 2025-05-23 01:01:00.311006 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.67s 2025-05-23 01:01:00.311015 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.60s 2025-05-23 01:01:00.311025 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.50s 2025-05-23 01:01:00.311035 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.41s 2025-05-23 01:01:00.311044 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.17s 2025-05-23 01:01:00.311054 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:00.311064 | orchestrator | 2025-05-23 01:01:00 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:00.311073 | orchestrator | 2025-05-23 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:03.346261 | orchestrator | 2025-05-23 01:01:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:03.346422 | orchestrator | 2025-05-23 01:01:03 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:03.346959 | orchestrator | 2025-05-23 01:01:03 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:03.347553 | orchestrator | 2025-05-23 01:01:03 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:03.348160 | orchestrator | 2025-05-23 01:01:03 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:03.348200 | orchestrator | 2025-05-23 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:06.382289 | orchestrator | 2025-05-23 01:01:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:06.382386 | orchestrator | 2025-05-23 01:01:06 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:06.383079 | orchestrator | 2025-05-23 01:01:06 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:06.383679 | orchestrator 
| 2025-05-23 01:01:06 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:06.384348 | orchestrator | 2025-05-23 01:01:06 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:06.384455 | orchestrator | 2025-05-23 01:01:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:09.410881 | orchestrator | 2025-05-23 01:01:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:09.410975 | orchestrator | 2025-05-23 01:01:09 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:09.410990 | orchestrator | 2025-05-23 01:01:09 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:09.411142 | orchestrator | 2025-05-23 01:01:09 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:09.411949 | orchestrator | 2025-05-23 01:01:09 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:09.411995 | orchestrator | 2025-05-23 01:01:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:12.437626 | orchestrator | 2025-05-23 01:01:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:12.438077 | orchestrator | 2025-05-23 01:01:12 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:12.438387 | orchestrator | 2025-05-23 01:01:12 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:12.438819 | orchestrator | 2025-05-23 01:01:12 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:12.439307 | orchestrator | 2025-05-23 01:01:12 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:12.439335 | orchestrator | 2025-05-23 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:15.462278 | orchestrator | 2025-05-23 01:01:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:15.462742 | orchestrator | 2025-05-23 01:01:15 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:15.463617 | orchestrator | 2025-05-23 01:01:15 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:15.465028 | orchestrator | 2025-05-23 01:01:15 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:15.465394 | orchestrator | 2025-05-23 01:01:15 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:15.465643 | orchestrator | 2025-05-23 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:18.491725 | orchestrator | 2025-05-23 01:01:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:18.493811 | orchestrator | 2025-05-23 01:01:18 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:18.494261 | orchestrator | 2025-05-23 01:01:18 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:18.494769 | orchestrator | 2025-05-23 01:01:18 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:18.495391 | orchestrator | 2025-05-23 01:01:18 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:18.495413 | orchestrator | 2025-05-23 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:21.534334 | orchestrator | 
2025-05-23 01:01:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:21.534469 | orchestrator | 2025-05-23 01:01:21 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:21.535037 | orchestrator | 2025-05-23 01:01:21 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:21.536721 | orchestrator | 2025-05-23 01:01:21 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:21.537567 | orchestrator | 2025-05-23 01:01:21 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:21.537651 | orchestrator | 2025-05-23 01:01:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:24.576184 | orchestrator | 2025-05-23 01:01:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:24.576271 | orchestrator | 2025-05-23 01:01:24 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:24.576287 | orchestrator | 2025-05-23 01:01:24 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:24.576325 | orchestrator | 2025-05-23 01:01:24 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:24.576337 | orchestrator | 2025-05-23 01:01:24 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:24.576348 | orchestrator | 2025-05-23 01:01:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:27.599251 | orchestrator | 2025-05-23 01:01:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:27.599345 | orchestrator | 2025-05-23 01:01:27 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:27.600376 | orchestrator | 2025-05-23 01:01:27 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:27.602349 | orchestrator | 2025-05-23 01:01:27 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:27.603575 | orchestrator | 2025-05-23 01:01:27 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:27.603608 | orchestrator | 2025-05-23 01:01:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:30.658767 | orchestrator | 2025-05-23 01:01:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:30.660830 | orchestrator | 2025-05-23 01:01:30 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:30.662515 | orchestrator | 2025-05-23 01:01:30 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:30.665282 | orchestrator | 2025-05-23 01:01:30 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:30.667160 | orchestrator | 2025-05-23 01:01:30 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:30.667184 | orchestrator | 2025-05-23 01:01:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:33.737491 | orchestrator | 2025-05-23 01:01:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:33.737599 | orchestrator | 2025-05-23 01:01:33 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:33.738169 | orchestrator | 2025-05-23 01:01:33 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 
01:01:33.739421 | orchestrator | 2025-05-23 01:01:33 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:33.740501 | orchestrator | 2025-05-23 01:01:33 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:33.740607 | orchestrator | 2025-05-23 01:01:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:36.791492 | orchestrator | 2025-05-23 01:01:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:36.792766 | orchestrator | 2025-05-23 01:01:36 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:36.794158 | orchestrator | 2025-05-23 01:01:36 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:36.796233 | orchestrator | 2025-05-23 01:01:36 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:36.797142 | orchestrator | 2025-05-23 01:01:36 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:36.797166 | orchestrator | 2025-05-23 01:01:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:39.854880 | orchestrator | 2025-05-23 01:01:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:39.856215 | orchestrator | 2025-05-23 01:01:39 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:39.858299 | orchestrator | 2025-05-23 01:01:39 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:39.859036 | orchestrator | 2025-05-23 01:01:39 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:39.860536 | orchestrator | 2025-05-23 01:01:39 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state STARTED 2025-05-23 01:01:39.860650 | orchestrator | 2025-05-23 01:01:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:42.977475 | orchestrator | 2025-05-23 01:01:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:42.980080 | orchestrator | 2025-05-23 01:01:42 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:42.982242 | orchestrator | 2025-05-23 01:01:42 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:42.984007 | orchestrator | 2025-05-23 01:01:42 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:42.987837 | orchestrator | 2025-05-23 01:01:42 | INFO  | Task 09a13020-2876-48b9-b8bf-d91b57f99e37 is in state SUCCESS 2025-05-23 01:01:42.989323 | orchestrator | 2025-05-23 01:01:42.989358 | orchestrator | 2025-05-23 01:01:42.989370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:01:42.989382 | orchestrator | 2025-05-23 01:01:42.989393 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:01:42.989405 | orchestrator | Friday 23 May 2025 00:58:49 +0000 (0:00:00.324) 0:00:00.324 ************ 2025-05-23 01:01:42.989416 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:01:42.989427 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:01:42.989438 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:01:42.989449 | orchestrator | 2025-05-23 01:01:42.989460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:01:42.989470 | orchestrator | Friday 23 
May 2025 00:58:49 +0000 (0:00:00.407) 0:00:00.732 ************ 2025-05-23 01:01:42.989482 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-23 01:01:42.989493 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-23 01:01:42.989504 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-23 01:01:42.989514 | orchestrator | 2025-05-23 01:01:42.989525 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-23 01:01:42.989536 | orchestrator | 2025-05-23 01:01:42.989547 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-23 01:01:42.989572 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.279) 0:00:01.011 ************ 2025-05-23 01:01:42.989583 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:01:42.989595 | orchestrator | 2025-05-23 01:01:42.989606 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-23 01:01:42.989616 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.566) 0:00:01.577 ************ 2025-05-23 01:01:42.989699 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-23 01:01:42.989713 | orchestrator | 2025-05-23 01:01:42.989724 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-23 01:01:42.989735 | orchestrator | Friday 23 May 2025 00:58:54 +0000 (0:00:03.495) 0:00:05.073 ************ 2025-05-23 01:01:42.989745 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-23 01:01:42.989756 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-23 01:01:42.989767 | orchestrator | 2025-05-23 01:01:42.989777 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-23 01:01:42.989830 | orchestrator | Friday 23 May 2025 00:59:00 +0000 (0:00:06.255) 0:00:11.328 ************ 2025-05-23 01:01:42.989858 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-23 01:01:42.989884 | orchestrator | 2025-05-23 01:01:42.989895 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-23 01:01:42.989906 | orchestrator | Friday 23 May 2025 00:59:03 +0000 (0:00:03.403) 0:00:14.732 ************ 2025-05-23 01:01:42.989917 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:01:42.990196 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-23 01:01:42.990216 | orchestrator | 2025-05-23 01:01:42.990229 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-23 01:01:42.990242 | orchestrator | Friday 23 May 2025 00:59:07 +0000 (0:00:03.851) 0:00:18.583 ************ 2025-05-23 01:01:42.990255 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:01:42.990267 | orchestrator | 2025-05-23 01:01:42.990280 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-23 01:01:42.990292 | orchestrator | Friday 23 May 2025 00:59:10 +0000 (0:00:03.074) 0:00:21.658 ************ 2025-05-23 01:01:42.990305 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-23 
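
Annotation: the Keystone registration steps logged above (creating the dns service, the internal and public endpoints, the service project, the designate user, and the admin role grant) come from the service-ks-register role. As an illustration only, roughly the same result could be reproduced with a minimal playbook based on the openstack.cloud collection; this is a hedged sketch, not the role's actual implementation, and the password variable is a placeholder.

# Minimal sketch, assuming the openstack.cloud collection and admin credentials
# provided via clouds.yaml or OS_* environment variables (not shown here).
- name: Register designate in Keystone (illustrative only)
  hosts: localhost
  tasks:
    - name: Create the designate (dns) service
      openstack.cloud.catalog_service:
        name: designate
        service_type: dns
        state: present

    - name: Create the internal and public endpoints
      openstack.cloud.endpoint:
        service: designate
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9001" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9001" }

    - name: Ensure the service project exists
      openstack.cloud.project:
        name: service
        state: present

    - name: Create the designate service user
      openstack.cloud.identity_user:
        name: designate
        password: "{{ designate_keystone_password }}"  # placeholder variable
        default_project: service
        state: present

    - name: Grant admin to the designate user within the service project
      openstack.cloud.role_assignment:
        user: designate
        role: admin
        project: service
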
01:01:42.990316 | orchestrator | 2025-05-23 01:01:42.990327 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-23 01:01:42.990337 | orchestrator | Friday 23 May 2025 00:59:14 +0000 (0:00:04.008) 0:00:25.666 ************ 2025-05-23 01:01:42.990352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.990387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.990400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.990432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990521 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.990637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.990660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.990677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.990689 | orchestrator | 2025-05-23 01:01:42.990700 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-23 01:01:42.990711 | orchestrator | Friday 23 May 2025 00:59:17 +0000 (0:00:02.943) 0:00:28.609 ************ 2025-05-23 01:01:42.990721 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.990732 | orchestrator | 2025-05-23 01:01:42.990743 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-23 01:01:42.990754 | orchestrator | Friday 23 May 2025 00:59:17 +0000 (0:00:00.101) 0:00:28.710 ************ 2025-05-23 01:01:42.990772 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.990783 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.990794 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.990805 | orchestrator | 2025-05-23 01:01:42.990815 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-23 01:01:42.990826 | orchestrator | Friday 23 May 2025 00:59:18 +0000 (0:00:00.363) 0:00:29.074 ************ 2025-05-23 01:01:42.990849 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:01:42.990860 | orchestrator | 2025-05-23 01:01:42.990871 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-23 01:01:42.990882 | orchestrator | Friday 23 May 2025 00:59:18 +0000 (0:00:00.472) 0:00:29.547 ************ 2025-05-23 01:01:42.990897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.991291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.991306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.991329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.991562 | orchestrator | 2025-05-23 01:01:42.991573 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-23 01:01:42.991584 | orchestrator | Friday 23 May 2025 00:59:24 +0000 (0:00:06.130) 0:00:35.677 ************ 2025-05-23 01:01:42.991596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.991612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.991624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991653 | 
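
Annotation: the backend internal TLS copy tasks in this block are skipped on all three nodes, which is expected when backend TLS is not enabled for the designate services in this testbed. The healthcheck dict repeated in every item maps onto an ordinary container healthcheck; as an illustration only (kolla-ansible applies these values through its own container handling, not through compose), the designate_api entry logged for testbed-node-0 corresponds roughly to:

# Illustrative compose-style rendering of the logged healthcheck values,
# interpreting the bare numbers as seconds.
services:
  designate_api:
    image: registry.osism.tech/kolla/release/designate-api:18.0.1.20241206
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
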
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991682 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.991694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.991710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.991756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991820 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.991831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.991847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.991859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.991919 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.991930 | orchestrator | 2025-05-23 01:01:42.991941 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-23 01:01:42.991952 | orchestrator | Friday 23 May 2025 00:59:28 +0000 (0:00:03.453) 0:00:39.131 ************ 2025-05-23 01:01:42.991964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.991980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.991991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992050 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.992061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.992077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.992117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.992178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.992205 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.992217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.992277 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.992288 | orchestrator | 2025-05-23 01:01:42.992299 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-23 01:01:42.992310 | orchestrator | Friday 23 May 2025 00:59:30 +0000 (0:00:01.830) 0:00:40.961 ************ 2025-05-23 01:01:42.992321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.992337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.992358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.992369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.992467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993298 | orchestrator | 2025-05-23 01:01:42.993309 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-23 01:01:42.993320 | orchestrator | Friday 23 May 2025 00:59:36 +0000 (0:00:06.537) 0:00:47.499 ************ 2025-05-23 01:01:42.993332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.993344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.993385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.993398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 
01:01:42.993443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993708 | orchestrator | 2025-05-23 01:01:42.993721 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-23 01:01:42.993733 | orchestrator | Friday 23 May 2025 00:59:56 +0000 (0:00:19.703) 0:01:07.202 ************ 2025-05-23 01:01:42.993745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-23 01:01:42.993757 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-23 01:01:42.993770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-23 01:01:42.993781 | orchestrator | 2025-05-23 01:01:42.993794 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-23 01:01:42.993807 | orchestrator | Friday 23 May 2025 01:00:06 +0000 (0:00:10.487) 0:01:17.690 ************ 2025-05-23 01:01:42.993819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-23 01:01:42.993830 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-23 01:01:42.993843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-23 01:01:42.993855 | orchestrator | 2025-05-23 01:01:42.993874 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-23 01:01:42.993885 | orchestrator | Friday 23 May 2025 01:00:11 +0000 (0:00:04.541) 0:01:22.231 ************ 2025-05-23 01:01:42.993897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.993919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.993931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.993943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.993954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.993994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994342 | orchestrator | 2025-05-23 01:01:42.994357 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-23 01:01:42.994368 | orchestrator | Friday 23 May 2025 01:00:15 +0000 (0:00:04.029) 0:01:26.260 ************ 2025-05-23 01:01:42.994380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.994391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.994407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.994426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.994765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994783 | orchestrator | 2025-05-23 01:01:42.994795 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-23 01:01:42.994805 | orchestrator | Friday 23 May 2025 01:00:18 +0000 (0:00:02.695) 0:01:28.956 ************ 2025-05-23 01:01:42.994816 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.994828 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.994838 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.994849 | orchestrator | 2025-05-23 01:01:42.994860 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-23 01:01:42.994871 | orchestrator | Friday 23 May 2025 01:00:18 +0000 (0:00:00.335) 0:01:29.291 ************ 2025-05-23 01:01:42.994888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.994900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.994912 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.994979 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.994997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.995009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.995025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995158 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.995170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-23 01:01:42.995187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-23 01:01:42.995198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995269 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.995280 | orchestrator | 2025-05-23 01:01:42.995291 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-23 01:01:42.995302 | orchestrator | Friday 23 May 2025 01:00:19 +0000 (0:00:00.764) 0:01:30.056 ************ 2025-05-23 01:01:42.995317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.995333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.995344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-23 01:01:42.995360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-23 01:01:42.995630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-23 01:01:42.995640 | orchestrator | 2025-05-23 01:01:42.995650 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-23 01:01:42.995659 | orchestrator | Friday 23 May 2025 01:00:25 +0000 (0:00:06.040) 0:01:36.097 ************ 2025-05-23 01:01:42.995669 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:01:42.995678 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:01:42.995688 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:01:42.995697 | orchestrator | 2025-05-23 01:01:42.995707 | orchestrator | TASK [designate : 
Creating Designate databases] ******************************** 2025-05-23 01:01:42.995716 | orchestrator | Friday 23 May 2025 01:00:26 +0000 (0:00:00.834) 0:01:36.931 ************ 2025-05-23 01:01:42.995726 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-23 01:01:42.995735 | orchestrator | 2025-05-23 01:01:42.995745 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-23 01:01:42.995755 | orchestrator | Friday 23 May 2025 01:00:28 +0000 (0:00:02.267) 0:01:39.198 ************ 2025-05-23 01:01:42.995764 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 01:01:42.995774 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-23 01:01:42.995783 | orchestrator | 2025-05-23 01:01:42.995793 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-23 01:01:42.995802 | orchestrator | Friday 23 May 2025 01:00:30 +0000 (0:00:02.479) 0:01:41.678 ************ 2025-05-23 01:01:42.995812 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:42.995821 | orchestrator | 2025-05-23 01:01:42.995831 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-23 01:01:42.995840 | orchestrator | Friday 23 May 2025 01:00:45 +0000 (0:00:15.161) 0:01:56.840 ************ 2025-05-23 01:01:42.995850 | orchestrator | 2025-05-23 01:01:42.995859 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-23 01:01:42.995868 | orchestrator | Friday 23 May 2025 01:00:46 +0000 (0:00:00.096) 0:01:56.936 ************ 2025-05-23 01:01:42.995878 | orchestrator | 2025-05-23 01:01:42.995887 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-23 01:01:42.995897 | orchestrator | Friday 23 May 2025 01:00:46 +0000 (0:00:00.094) 0:01:57.031 ************ 2025-05-23 01:01:42.995906 | orchestrator | 2025-05-23 01:01:42.995916 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-23 01:01:42.995925 | orchestrator | Friday 23 May 2025 01:00:46 +0000 (0:00:00.093) 0:01:57.124 ************ 2025-05-23 01:01:42.995939 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:42.995949 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:42.995959 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:42.995968 | orchestrator | 2025-05-23 01:01:42.995978 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-23 01:01:42.995988 | orchestrator | Friday 23 May 2025 01:00:56 +0000 (0:00:09.974) 0:02:07.098 ************ 2025-05-23 01:01:42.996003 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:42.996013 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:42.996022 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:42.996032 | orchestrator | 2025-05-23 01:01:42.996041 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-23 01:01:42.996051 | orchestrator | Friday 23 May 2025 01:01:07 +0000 (0:00:11.054) 0:02:18.153 ************ 2025-05-23 01:01:42.996060 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:01:42.996069 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:01:42.996079 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:01:42.996112 | orchestrator | 2025-05-23 01:01:42.996122 | orchestrator | RUNNING 
HANDLER [designate : Restart designate-producer container] *************
2025-05-23 01:01:42.996131 | orchestrator | Friday 23 May 2025 01:01:13 +0000 (0:00:06.384) 0:02:24.538 ************
2025-05-23 01:01:42.996141 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:01:42.996150 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:01:42.996160 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:01:42.996169 | orchestrator |
2025-05-23 01:01:42.996179 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-23 01:01:42.996188 | orchestrator | Friday 23 May 2025 01:01:19 +0000 (0:00:06.347) 0:02:30.885 ************
2025-05-23 01:01:42.996198 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:01:42.996207 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:01:42.996216 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:01:42.996226 | orchestrator |
2025-05-23 01:01:42.996235 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-23 01:01:42.996245 | orchestrator | Friday 23 May 2025 01:01:27 +0000 (0:00:07.222) 0:02:38.107 ************
2025-05-23 01:01:42.996254 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:01:42.996263 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:01:42.996273 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:01:42.996282 | orchestrator |
2025-05-23 01:01:42.996292 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-23 01:01:42.996306 | orchestrator | Friday 23 May 2025 01:01:37 +0000 (0:00:10.375) 0:02:48.483 ************
2025-05-23 01:01:42.996315 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:01:42.996325 | orchestrator |
2025-05-23 01:01:42.996335 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 01:01:42.996345 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-23 01:01:42.996355 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-23 01:01:42.996365 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-23 01:01:42.996375 | orchestrator |
2025-05-23 01:01:42.996384 | orchestrator |
2025-05-23 01:01:42.996394 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 01:01:42.996403 | orchestrator | Friday 23 May 2025 01:01:42 +0000 (0:00:04.640) 0:02:53.124 ************
2025-05-23 01:01:42.996413 | orchestrator | ===============================================================================
2025-05-23 01:01:42.996422 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.70s
2025-05-23 01:01:42.996431 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.16s
2025-05-23 01:01:42.996441 | orchestrator | designate : Restart designate-api container ---------------------------- 11.05s
2025-05-23 01:01:42.996450 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.49s
2025-05-23 01:01:42.996460 | orchestrator | designate : Restart designate-worker container ------------------------- 10.38s
2025-05-23 01:01:42.996469 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.97s
2025-05-23 01:01:42.996478 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.22s
2025-05-23 01:01:42.996494 | orchestrator | designate : Copying over config.json files for services ----------------- 6.54s
2025-05-23 01:01:42.996504 | orchestrator | designate : Restart designate-central container ------------------------- 6.38s
2025-05-23 01:01:42.996513 | orchestrator | designate : Restart designate-producer container ------------------------ 6.35s
2025-05-23 01:01:42.996522 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.26s
2025-05-23 01:01:42.996532 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.13s
2025-05-23 01:01:42.996541 | orchestrator | designate : Check designate containers ---------------------------------- 6.04s
2025-05-23 01:01:42.996551 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 4.64s
2025-05-23 01:01:42.996560 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.54s
2025-05-23 01:01:42.996570 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.03s
2025-05-23 01:01:42.996579 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.01s
2025-05-23 01:01:42.996589 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.85s
2025-05-23 01:01:42.996598 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.50s
2025-05-23 01:01:42.996608 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 3.45s
2025-05-23 01:01:42.996623 | orchestrator | 2025-05-23 01:01:42 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:01:46.067510 | orchestrator | 2025-05-23 01:01:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:01:46.069132 | orchestrator | 2025-05-23 01:01:46 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED
2025-05-23 01:01:46.070544 | orchestrator | 2025-05-23 01:01:46 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED
2025-05-23 01:01:46.072605 | orchestrator | 2025-05-23 01:01:46 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED
2025-05-23 01:01:46.073407 | orchestrator | 2025-05-23 01:01:46 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED
2025-05-23 01:01:46.073586 | orchestrator | 2025-05-23 01:01:46 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:01:49.130280 | orchestrator | 2025-05-23 01:01:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:01:49.130808 | orchestrator | 2025-05-23 01:01:49 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED
2025-05-23 01:01:49.133059 | orchestrator | 2025-05-23 01:01:49 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED
2025-05-23 01:01:49.135791 | orchestrator | 2025-05-23 01:01:49 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED
2025-05-23 01:01:49.137802 | orchestrator | 2025-05-23 01:01:49 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED
2025-05-23 01:01:49.138077 | orchestrator | 2025-05-23 01:01:49 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:01:52.192417 | orchestrator | 2025-05-23 01:01:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
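Between the Ansible recaps, the log above shows the deploy job polling the state of the remaining deployment tasks (identified by their UUIDs) once per second until each one leaves the STARTED state. The client that performs these checks is not visible in the log itself; the following is only a minimal, hypothetical sketch in Python of such a wait loop, where wait_for_tasks and fetch_state are illustrative placeholders rather than the real OSISM client API.

    import time

    def wait_for_tasks(task_ids, fetch_state, interval=1.0):
        """Poll fetch_state(task_id) until every task reaches a final state.

        fetch_state is a caller-supplied callable returning strings such as
        'STARTED' or 'SUCCESS'; this mirrors the log lines above but is not
        the actual OSISM client API.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    # Example usage with a stub that reports success immediately:
    # wait_for_tasks(["eee81a36-e0fa-4360-a4d6-6ece23412765"], lambda _id: "SUCCESS")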
2025-05-23 01:01:52.194783 | orchestrator | 2025-05-23 01:01:52 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:01:52.196541 | orchestrator | 2025-05-23 01:01:52 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:52.198825 | orchestrator | 2025-05-23 01:01:52 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:52.200598 | orchestrator | 2025-05-23 01:01:52 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:52.200675 | orchestrator | 2025-05-23 01:01:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:55.244512 | orchestrator | 2025-05-23 01:01:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:55.245323 | orchestrator | 2025-05-23 01:01:55 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:01:55.247972 | orchestrator | 2025-05-23 01:01:55 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:55.251528 | orchestrator | 2025-05-23 01:01:55 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:55.253582 | orchestrator | 2025-05-23 01:01:55 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:55.254276 | orchestrator | 2025-05-23 01:01:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:01:58.300661 | orchestrator | 2025-05-23 01:01:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:01:58.302686 | orchestrator | 2025-05-23 01:01:58 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:01:58.304464 | orchestrator | 2025-05-23 01:01:58 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:01:58.305988 | orchestrator | 2025-05-23 01:01:58 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:01:58.307592 | orchestrator | 2025-05-23 01:01:58 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:01:58.307879 | orchestrator | 2025-05-23 01:01:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:01.354452 | orchestrator | 2025-05-23 01:02:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:01.355724 | orchestrator | 2025-05-23 01:02:01 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:01.357456 | orchestrator | 2025-05-23 01:02:01 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:02:01.358412 | orchestrator | 2025-05-23 01:02:01 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:01.359497 | orchestrator | 2025-05-23 01:02:01 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:01.359524 | orchestrator | 2025-05-23 01:02:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:04.406278 | orchestrator | 2025-05-23 01:02:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:04.406737 | orchestrator | 2025-05-23 01:02:04 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:04.407989 | orchestrator | 2025-05-23 01:02:04 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:02:04.408588 | orchestrator | 2025-05-23 01:02:04 | INFO  | Task 
30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:04.409339 | orchestrator | 2025-05-23 01:02:04 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:04.409365 | orchestrator | 2025-05-23 01:02:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:07.467063 | orchestrator | 2025-05-23 01:02:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:07.468472 | orchestrator | 2025-05-23 01:02:07 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:07.470353 | orchestrator | 2025-05-23 01:02:07 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:02:07.472883 | orchestrator | 2025-05-23 01:02:07 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:07.475233 | orchestrator | 2025-05-23 01:02:07 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:07.475269 | orchestrator | 2025-05-23 01:02:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:10.522756 | orchestrator | 2025-05-23 01:02:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:10.525457 | orchestrator | 2025-05-23 01:02:10 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:10.527467 | orchestrator | 2025-05-23 01:02:10 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:02:10.530412 | orchestrator | 2025-05-23 01:02:10 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:10.533206 | orchestrator | 2025-05-23 01:02:10 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:10.533664 | orchestrator | 2025-05-23 01:02:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:13.586222 | orchestrator | 2025-05-23 01:02:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:13.588522 | orchestrator | 2025-05-23 01:02:13 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:13.592551 | orchestrator | 2025-05-23 01:02:13 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state STARTED 2025-05-23 01:02:13.595144 | orchestrator | 2025-05-23 01:02:13 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:13.598062 | orchestrator | 2025-05-23 01:02:13 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:13.598703 | orchestrator | 2025-05-23 01:02:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:16.639046 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:16.639193 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:16.639940 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task c170fe47-0bbb-411f-8d3a-f0642d2fc71d is in state STARTED 2025-05-23 01:02:16.641022 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task b93ae559-39cf-40b6-a6ec-9625d1192cee is in state SUCCESS 2025-05-23 01:02:16.642988 | orchestrator | 2025-05-23 01:02:16.643031 | orchestrator | 2025-05-23 01:02:16.643043 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:02:16.643055 | orchestrator | 2025-05-23 01:02:16.643095 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-05-23 01:02:16.643109 | orchestrator | Friday 23 May 2025 01:01:03 +0000 (0:00:00.257) 0:00:00.257 ************ 2025-05-23 01:02:16.643120 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:02:16.643132 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:02:16.643143 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:02:16.643154 | orchestrator | 2025-05-23 01:02:16.643165 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:02:16.643176 | orchestrator | Friday 23 May 2025 01:01:03 +0000 (0:00:00.383) 0:00:00.641 ************ 2025-05-23 01:02:16.643187 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-23 01:02:16.643198 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-23 01:02:16.643209 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-23 01:02:16.643220 | orchestrator | 2025-05-23 01:02:16.643230 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-23 01:02:16.643242 | orchestrator | 2025-05-23 01:02:16.643253 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-23 01:02:16.643289 | orchestrator | Friday 23 May 2025 01:01:03 +0000 (0:00:00.324) 0:00:00.966 ************ 2025-05-23 01:02:16.643301 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:02:16.643312 | orchestrator | 2025-05-23 01:02:16.643323 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-23 01:02:16.643334 | orchestrator | Friday 23 May 2025 01:01:05 +0000 (0:00:01.292) 0:00:02.258 ************ 2025-05-23 01:02:16.643344 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-23 01:02:16.643355 | orchestrator | 2025-05-23 01:02:16.643365 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-23 01:02:16.643375 | orchestrator | Friday 23 May 2025 01:01:08 +0000 (0:00:03.460) 0:00:05.719 ************ 2025-05-23 01:02:16.643386 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-23 01:02:16.643397 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-23 01:02:16.643415 | orchestrator | 2025-05-23 01:02:16.643434 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-23 01:02:16.643452 | orchestrator | Friday 23 May 2025 01:01:15 +0000 (0:00:06.443) 0:00:12.162 ************ 2025-05-23 01:02:16.643470 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:02:16.643491 | orchestrator | 2025-05-23 01:02:16.643510 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-23 01:02:16.643528 | orchestrator | Friday 23 May 2025 01:01:18 +0000 (0:00:03.524) 0:00:15.687 ************ 2025-05-23 01:02:16.643544 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:02:16.643554 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-23 01:02:16.643565 | orchestrator | 2025-05-23 01:02:16.643593 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-23 01:02:16.643606 
| orchestrator | Friday 23 May 2025 01:01:22 +0000 (0:00:03.776) 0:00:19.463 ************ 2025-05-23 01:02:16.643619 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:02:16.643631 | orchestrator | 2025-05-23 01:02:16.643644 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-23 01:02:16.643656 | orchestrator | Friday 23 May 2025 01:01:25 +0000 (0:00:03.325) 0:00:22.788 ************ 2025-05-23 01:02:16.643668 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-23 01:02:16.643680 | orchestrator | 2025-05-23 01:02:16.643692 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-23 01:02:16.643704 | orchestrator | Friday 23 May 2025 01:01:29 +0000 (0:00:04.138) 0:00:26.927 ************ 2025-05-23 01:02:16.643716 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.643728 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:02:16.643741 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:02:16.643753 | orchestrator | 2025-05-23 01:02:16.643765 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-23 01:02:16.643777 | orchestrator | Friday 23 May 2025 01:01:30 +0000 (0:00:00.329) 0:00:27.257 ************ 2025-05-23 01:02:16.643795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.643838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.643853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.643866 | orchestrator | 2025-05-23 01:02:16.643879 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-23 01:02:16.643892 | orchestrator | Friday 23 May 2025 01:01:31 +0000 (0:00:00.977) 0:00:28.234 ************ 2025-05-23 01:02:16.643904 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.643916 | orchestrator | 2025-05-23 01:02:16.643928 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-23 01:02:16.643938 | orchestrator | Friday 23 May 2025 01:01:31 +0000 (0:00:00.141) 0:00:28.376 ************ 2025-05-23 01:02:16.643948 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.643959 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:02:16.643970 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:02:16.643980 | orchestrator | 2025-05-23 01:02:16.643996 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-23 01:02:16.644007 | orchestrator | Friday 23 May 2025 01:01:31 +0000 (0:00:00.297) 0:00:28.673 ************ 2025-05-23 01:02:16.644018 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:02:16.644029 | orchestrator | 2025-05-23 01:02:16.644039 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-23 01:02:16.644050 | orchestrator | Friday 23 May 2025 01:01:32 +0000 (0:00:00.853) 0:00:29.526 ************ 2025-05-23 01:02:16.644061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644164 | orchestrator | 2025-05-23 01:02:16.644175 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-23 01:02:16.644186 | orchestrator | Friday 23 May 2025 01:01:34 +0000 (0:00:01.589) 0:00:31.115 ************ 2025-05-23 01:02:16.644203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644214 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.644226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644244 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:02:16.644263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644275 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:02:16.644285 | orchestrator | 2025-05-23 01:02:16.644296 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-23 01:02:16.644307 | orchestrator | Friday 23 May 2025 01:01:34 +0000 (0:00:00.491) 0:00:31.607 ************ 2025-05-23 01:02:16.644318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644329 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.644344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644355 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:02:16.644366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644383 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:02:16.644394 | orchestrator | 2025-05-23 01:02:16.644405 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-23 01:02:16.644416 | orchestrator | Friday 23 May 2025 01:01:35 +0000 (0:00:01.167) 0:00:32.774 ************ 2025-05-23 01:02:16.644435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644481 | orchestrator | 2025-05-23 01:02:16.644500 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-23 01:02:16.644530 | orchestrator | Friday 23 May 2025 01:01:37 +0000 (0:00:01.746) 0:00:34.520 ************ 2025-05-23 01:02:16.644552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644614 | orchestrator | 2025-05-23 01:02:16.644625 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-23 01:02:16.644636 | orchestrator | Friday 23 May 2025 01:01:39 +0000 (0:00:02.364) 0:00:36.885 ************ 2025-05-23 01:02:16.644646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-23 01:02:16.644657 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-23 01:02:16.644668 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-23 01:02:16.644679 | orchestrator | 2025-05-23 01:02:16.644690 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-23 01:02:16.644701 | orchestrator | Friday 23 May 2025 01:01:41 +0000 (0:00:01.822) 0:00:38.707 ************ 2025-05-23 01:02:16.644712 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:02:16.644723 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:02:16.644733 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:02:16.644744 | orchestrator | 2025-05-23 01:02:16.644755 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-23 01:02:16.644766 | orchestrator | Friday 23 May 2025 01:01:43 +0000 (0:00:01.723) 0:00:40.430 ************ 2025-05-23 01:02:16.644782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644800 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:02:16.644811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644822 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:02:16.644841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-23 01:02:16.644852 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:02:16.644863 | orchestrator | 2025-05-23 01:02:16.644874 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-23 01:02:16.644885 | orchestrator | Friday 23 May 2025 01:01:44 +0000 (0:00:00.875) 0:00:41.306 ************ 2025-05-23 01:02:16.644896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 
01:02:16.644936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-23 01:02:16.644948 | orchestrator | 2025-05-23 01:02:16.644959 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-23 01:02:16.644969 | orchestrator | Friday 23 May 2025 01:01:45 +0000 (0:00:01.353) 0:00:42.659 ************ 2025-05-23 01:02:16.644980 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:02:16.644991 | orchestrator | 2025-05-23 01:02:16.645001 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-23 01:02:16.645012 | orchestrator | Friday 23 May 2025 01:01:48 +0000 (0:00:02.471) 0:00:45.131 ************ 2025-05-23 01:02:16.645023 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:02:16.645033 | orchestrator | 2025-05-23 01:02:16.645044 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-23 01:02:16.645055 | orchestrator | Friday 23 May 2025 01:01:50 +0000 (0:00:02.304) 0:00:47.435 ************ 2025-05-23 01:02:16.645095 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:02:16.645107 | orchestrator | 2025-05-23 01:02:16.645119 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-23 01:02:16.645129 | orchestrator | Friday 23 May 2025 01:02:03 +0000 (0:00:12.665) 0:01:00.100 ************ 2025-05-23 01:02:16.645140 | orchestrator | 2025-05-23 01:02:16.645150 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-23 01:02:16.645161 | orchestrator | Friday 23 May 2025 01:02:03 +0000 (0:00:00.057) 0:01:00.158 ************ 2025-05-23 01:02:16.645172 | orchestrator | 2025-05-23 01:02:16.645182 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-23 01:02:16.645193 | orchestrator | Friday 23 May 2025 01:02:03 +0000 (0:00:00.214) 0:01:00.373 ************ 2025-05-23 01:02:16.645204 | orchestrator | 2025-05-23 01:02:16.645214 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-23 01:02:16.645225 | orchestrator | Friday 23 May 2025 01:02:03 +0000 (0:00:00.059) 0:01:00.432 ************ 2025-05-23 01:02:16.645235 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:02:16.645246 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:02:16.645257 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:02:16.645274 | orchestrator | 2025-05-23 01:02:16.645285 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 
01:02:16.645297 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-23 01:02:16.645309 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 01:02:16.645319 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-23 01:02:16.645330 | orchestrator | 2025-05-23 01:02:16.645341 | orchestrator | 2025-05-23 01:02:16.645351 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:02:16.645362 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:10.272) 0:01:10.705 ************ 2025-05-23 01:02:16.645373 | orchestrator | =============================================================================== 2025-05-23 01:02:16.645383 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.67s 2025-05-23 01:02:16.645394 | orchestrator | placement : Restart placement-api container ---------------------------- 10.27s 2025-05-23 01:02:16.645404 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s 2025-05-23 01:02:16.645415 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.14s 2025-05-23 01:02:16.645425 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.78s 2025-05-23 01:02:16.645436 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.52s 2025-05-23 01:02:16.645446 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.46s 2025-05-23 01:02:16.645457 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.33s 2025-05-23 01:02:16.645472 | orchestrator | placement : Creating placement databases -------------------------------- 2.47s 2025-05-23 01:02:16.645483 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.36s 2025-05-23 01:02:16.645496 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.30s 2025-05-23 01:02:16.645515 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.82s 2025-05-23 01:02:16.645534 | orchestrator | placement : Copying over config.json files for services ----------------- 1.75s 2025-05-23 01:02:16.645555 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.72s 2025-05-23 01:02:16.645575 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.59s 2025-05-23 01:02:16.645595 | orchestrator | placement : Check placement containers ---------------------------------- 1.35s 2025-05-23 01:02:16.645612 | orchestrator | placement : include_tasks ----------------------------------------------- 1.29s 2025-05-23 01:02:16.645623 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.17s 2025-05-23 01:02:16.645634 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.98s 2025-05-23 01:02:16.645644 | orchestrator | placement : Copying over existing policy file --------------------------- 0.88s 2025-05-23 01:02:16.645655 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:16.645666 | orchestrator | 2025-05-23 01:02:16 | INFO  | Task 
1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:16.645677 | orchestrator | 2025-05-23 01:02:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:19.679448 | orchestrator | 2025-05-23 01:02:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:19.679660 | orchestrator | 2025-05-23 01:02:19 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:19.680187 | orchestrator | 2025-05-23 01:02:19 | INFO  | Task c170fe47-0bbb-411f-8d3a-f0642d2fc71d is in state SUCCESS 2025-05-23 01:02:19.680927 | orchestrator | 2025-05-23 01:02:19 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:19.681381 | orchestrator | 2025-05-23 01:02:19 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:19.681403 | orchestrator | 2025-05-23 01:02:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:22.729917 | orchestrator | 2025-05-23 01:02:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:22.730106 | orchestrator | 2025-05-23 01:02:22 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:22.730608 | orchestrator | 2025-05-23 01:02:22 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:22.731350 | orchestrator | 2025-05-23 01:02:22 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:22.731913 | orchestrator | 2025-05-23 01:02:22 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:22.731936 | orchestrator | 2025-05-23 01:02:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:25.773992 | orchestrator | 2025-05-23 01:02:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:25.774821 | orchestrator | 2025-05-23 01:02:25 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:25.776545 | orchestrator | 2025-05-23 01:02:25 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:25.777726 | orchestrator | 2025-05-23 01:02:25 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:25.779634 | orchestrator | 2025-05-23 01:02:25 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:25.779670 | orchestrator | 2025-05-23 01:02:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:28.820722 | orchestrator | 2025-05-23 01:02:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:28.820829 | orchestrator | 2025-05-23 01:02:28 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:28.821522 | orchestrator | 2025-05-23 01:02:28 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:28.822351 | orchestrator | 2025-05-23 01:02:28 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:28.823278 | orchestrator | 2025-05-23 01:02:28 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:28.823306 | orchestrator | 2025-05-23 01:02:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:31.875838 | orchestrator | 2025-05-23 01:02:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:31.878737 | orchestrator | 2025-05-23 01:02:31 | INFO  | Task 
cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:31.880335 | orchestrator | 2025-05-23 01:02:31 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:31.882134 | orchestrator | 2025-05-23 01:02:31 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:31.883816 | orchestrator | 2025-05-23 01:02:31 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:31.884025 | orchestrator | 2025-05-23 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:34.955687 | orchestrator | 2025-05-23 01:02:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:34.955819 | orchestrator | 2025-05-23 01:02:34 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:34.957544 | orchestrator | 2025-05-23 01:02:34 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:34.958997 | orchestrator | 2025-05-23 01:02:34 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:34.960468 | orchestrator | 2025-05-23 01:02:34 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:34.960577 | orchestrator | 2025-05-23 01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:38.029271 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:38.031534 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:38.031788 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task 90294022-a11e-415c-82f2-c02a72a5d709 is in state STARTED 2025-05-23 01:02:38.033322 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:38.037146 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:38.037885 | orchestrator | 2025-05-23 01:02:38 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:38.038112 | orchestrator | 2025-05-23 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:41.107775 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:41.107896 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:41.112366 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task 90294022-a11e-415c-82f2-c02a72a5d709 is in state STARTED 2025-05-23 01:02:41.113312 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:41.116971 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:41.117873 | orchestrator | 2025-05-23 01:02:41 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:41.117908 | orchestrator | 2025-05-23 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:44.151222 | orchestrator | 2025-05-23 01:02:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:44.151408 | orchestrator | 2025-05-23 01:02:44 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:44.152118 | orchestrator | 2025-05-23 
01:02:44 | INFO  | Task 90294022-a11e-415c-82f2-c02a72a5d709 is in state STARTED 2025-05-23 01:02:44.152720 | orchestrator | 2025-05-23 01:02:44 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:44.153683 | orchestrator | 2025-05-23 01:02:44 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:44.156650 | orchestrator | 2025-05-23 01:02:44 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:44.156687 | orchestrator | 2025-05-23 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:47.182659 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:47.182883 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:47.183655 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task 90294022-a11e-415c-82f2-c02a72a5d709 is in state STARTED 2025-05-23 01:02:47.184194 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:47.184848 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:47.185485 | orchestrator | 2025-05-23 01:02:47 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:47.185507 | orchestrator | 2025-05-23 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:50.216999 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:50.217220 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:50.217412 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task 90294022-a11e-415c-82f2-c02a72a5d709 is in state SUCCESS 2025-05-23 01:02:50.220615 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:50.221426 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:50.226833 | orchestrator | 2025-05-23 01:02:50 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:50.226882 | orchestrator | 2025-05-23 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:53.270716 | orchestrator | 2025-05-23 01:02:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:53.270990 | orchestrator | 2025-05-23 01:02:53 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:53.271630 | orchestrator | 2025-05-23 01:02:53 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:53.274984 | orchestrator | 2025-05-23 01:02:53 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:53.275516 | orchestrator | 2025-05-23 01:02:53 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:53.278411 | orchestrator | 2025-05-23 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:56.321480 | orchestrator | 2025-05-23 01:02:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:56.327109 | orchestrator | 2025-05-23 01:02:56 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:56.327736 | 
orchestrator | 2025-05-23 01:02:56 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:56.330116 | orchestrator | 2025-05-23 01:02:56 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:56.331795 | orchestrator | 2025-05-23 01:02:56 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:56.331805 | orchestrator | 2025-05-23 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:02:59.382323 | orchestrator | 2025-05-23 01:02:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:02:59.382461 | orchestrator | 2025-05-23 01:02:59 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:02:59.383598 | orchestrator | 2025-05-23 01:02:59 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:02:59.385894 | orchestrator | 2025-05-23 01:02:59 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:02:59.386806 | orchestrator | 2025-05-23 01:02:59 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:02:59.386845 | orchestrator | 2025-05-23 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:02.441971 | orchestrator | 2025-05-23 01:03:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:02.442785 | orchestrator | 2025-05-23 01:03:02 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:02.444010 | orchestrator | 2025-05-23 01:03:02 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:02.445003 | orchestrator | 2025-05-23 01:03:02 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:02.445731 | orchestrator | 2025-05-23 01:03:02 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:02.445821 | orchestrator | 2025-05-23 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:05.496769 | orchestrator | 2025-05-23 01:03:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:05.498469 | orchestrator | 2025-05-23 01:03:05 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:05.499485 | orchestrator | 2025-05-23 01:03:05 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:05.499515 | orchestrator | 2025-05-23 01:03:05 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:05.500289 | orchestrator | 2025-05-23 01:03:05 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:05.500312 | orchestrator | 2025-05-23 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:08.533190 | orchestrator | 2025-05-23 01:03:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:08.534192 | orchestrator | 2025-05-23 01:03:08 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:08.534704 | orchestrator | 2025-05-23 01:03:08 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:08.535499 | orchestrator | 2025-05-23 01:03:08 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:08.536925 | orchestrator | 2025-05-23 01:03:08 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 
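The repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deploy wrapper polling the asynchronous tasks it has queued until each one reaches a final state. Below is a minimal Python sketch of that polling pattern; get_state and the example task IDs are illustrative stand-ins, not the actual OSISM client API.

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    # Poll every task until it reports a final state (SUCCESS or FAILURE).
    # get_state is any callable mapping a task id to its current state; in the
    # real deployment that information comes from the task backend, here it is
    # a parameter so the sketch stays self-contained.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

# Example run with a fake state source that reports SUCCESS immediately.
wait_for_tasks(["eee81a36", "cdc384be"], get_state=lambda task_id: "SUCCESS")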
2025-05-23 01:03:08.536960 | orchestrator | 2025-05-23 01:03:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:11.583553 | orchestrator | 2025-05-23 01:03:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:11.584108 | orchestrator | 2025-05-23 01:03:11 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:11.584957 | orchestrator | 2025-05-23 01:03:11 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:11.585995 | orchestrator | 2025-05-23 01:03:11 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:11.586702 | orchestrator | 2025-05-23 01:03:11 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:11.586727 | orchestrator | 2025-05-23 01:03:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:14.633073 | orchestrator | 2025-05-23 01:03:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:14.633160 | orchestrator | 2025-05-23 01:03:14 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:14.633969 | orchestrator | 2025-05-23 01:03:14 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:14.634601 | orchestrator | 2025-05-23 01:03:14 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:14.635219 | orchestrator | 2025-05-23 01:03:14 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:14.635247 | orchestrator | 2025-05-23 01:03:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:17.686774 | orchestrator | 2025-05-23 01:03:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:17.687211 | orchestrator | 2025-05-23 01:03:17 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:17.687815 | orchestrator | 2025-05-23 01:03:17 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:17.688698 | orchestrator | 2025-05-23 01:03:17 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:17.689225 | orchestrator | 2025-05-23 01:03:17 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:17.689250 | orchestrator | 2025-05-23 01:03:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:20.735549 | orchestrator | 2025-05-23 01:03:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:20.738954 | orchestrator | 2025-05-23 01:03:20 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:20.740523 | orchestrator | 2025-05-23 01:03:20 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:20.741519 | orchestrator | 2025-05-23 01:03:20 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:20.742649 | orchestrator | 2025-05-23 01:03:20 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:20.742679 | orchestrator | 2025-05-23 01:03:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:23.784589 | orchestrator | 2025-05-23 01:03:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:23.785270 | orchestrator | 2025-05-23 01:03:23 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 
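The placement-api entries earlier in this stretch of the log each declare a container healthcheck of the form healthcheck_curl http://<node address>:8780 with a 30-second interval and timeout, three retries and a five-second start period. healthcheck_curl is a helper shipped in the kolla images; the sketch below only approximates its observable behaviour under the assumption that it exits 0 when the endpoint answers with a non-error HTTP status and non-zero otherwise, and is not the real script.

import sys
import urllib.error
import urllib.request

def check_endpoint(url, timeout=30.0):
    # Return 0 if the endpoint answers with an HTTP status below 400, else 1.
    # Rough stand-in for a curl-based container healthcheck.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 0 if response.status < 400 else 1
    except (urllib.error.URLError, OSError):
        return 1

if __name__ == "__main__":
    # Example: the internal placement API endpoint seen in the log above.
    sys.exit(check_endpoint("http://192.168.16.10:8780"))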
2025-05-23 01:03:23.786339 | orchestrator | 2025-05-23 01:03:23 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:23.787341 | orchestrator | 2025-05-23 01:03:23 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:23.788917 | orchestrator | 2025-05-23 01:03:23 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:23.788955 | orchestrator | 2025-05-23 01:03:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:26.841446 | orchestrator | 2025-05-23 01:03:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:26.843792 | orchestrator | 2025-05-23 01:03:26 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:26.846951 | orchestrator | 2025-05-23 01:03:26 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:26.849296 | orchestrator | 2025-05-23 01:03:26 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:26.851637 | orchestrator | 2025-05-23 01:03:26 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:26.852128 | orchestrator | 2025-05-23 01:03:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:29.903950 | orchestrator | 2025-05-23 01:03:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:29.907421 | orchestrator | 2025-05-23 01:03:29 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:29.908632 | orchestrator | 2025-05-23 01:03:29 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:29.909756 | orchestrator | 2025-05-23 01:03:29 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:29.911498 | orchestrator | 2025-05-23 01:03:29 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:29.911532 | orchestrator | 2025-05-23 01:03:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:32.966688 | orchestrator | 2025-05-23 01:03:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:32.966796 | orchestrator | 2025-05-23 01:03:32 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:32.966810 | orchestrator | 2025-05-23 01:03:32 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:32.966822 | orchestrator | 2025-05-23 01:03:32 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:32.966832 | orchestrator | 2025-05-23 01:03:32 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:32.966844 | orchestrator | 2025-05-23 01:03:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:36.015670 | orchestrator | 2025-05-23 01:03:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:36.015883 | orchestrator | 2025-05-23 01:03:36 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:36.016642 | orchestrator | 2025-05-23 01:03:36 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:36.018549 | orchestrator | 2025-05-23 01:03:36 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state STARTED 2025-05-23 01:03:36.018596 | orchestrator | 2025-05-23 01:03:36 | INFO  | Task 
1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:36.018609 | orchestrator | 2025-05-23 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:39.070926 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:39.073492 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED 2025-05-23 01:03:39.075767 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:39.078121 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:39.091130 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task 30a07b35-9bed-4513-9934-c44c963a145d is in state SUCCESS 2025-05-23 01:03:39.091321 | orchestrator | 2025-05-23 01:03:39.091342 | orchestrator | 2025-05-23 01:03:39.091354 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:03:39.091367 | orchestrator | 2025-05-23 01:03:39.091378 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:03:39.091390 | orchestrator | Friday 23 May 2025 01:02:17 +0000 (0:00:00.244) 0:00:00.244 ************ 2025-05-23 01:03:39.091401 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.091413 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.091424 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.091435 | orchestrator | 2025-05-23 01:03:39.091445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:03:39.091456 | orchestrator | Friday 23 May 2025 01:02:17 +0000 (0:00:00.454) 0:00:00.698 ************ 2025-05-23 01:03:39.091495 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-23 01:03:39.091507 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-23 01:03:39.091518 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-23 01:03:39.091528 | orchestrator | 2025-05-23 01:03:39.091539 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-23 01:03:39.091550 | orchestrator | 2025-05-23 01:03:39.091560 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-23 01:03:39.091571 | orchestrator | Friday 23 May 2025 01:02:18 +0000 (0:00:00.489) 0:00:01.188 ************ 2025-05-23 01:03:39.091582 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.091593 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.091603 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.091613 | orchestrator | 2025-05-23 01:03:39.091624 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:03:39.091636 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:03:39.091648 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:03:39.091659 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:03:39.091670 | orchestrator | 2025-05-23 01:03:39.091681 | orchestrator | 2025-05-23 01:03:39.091692 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 
01:03:39.091702 | orchestrator | Friday 23 May 2025 01:02:19 +0000 (0:00:00.922) 0:00:02.111 ************ 2025-05-23 01:03:39.091713 | orchestrator | =============================================================================== 2025-05-23 01:03:39.091724 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.92s 2025-05-23 01:03:39.091734 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-05-23 01:03:39.091745 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-05-23 01:03:39.091756 | orchestrator | 2025-05-23 01:03:39.091767 | orchestrator | None 2025-05-23 01:03:39.093337 | orchestrator | 2025-05-23 01:03:39.093370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:03:39.093382 | orchestrator | 2025-05-23 01:03:39.093393 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:03:39.093404 | orchestrator | Friday 23 May 2025 00:58:48 +0000 (0:00:00.266) 0:00:00.266 ************ 2025-05-23 01:03:39.093415 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.093427 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.093437 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.093448 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:03:39.093458 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:03:39.093469 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:03:39.093479 | orchestrator | 2025-05-23 01:03:39.093490 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:03:39.093501 | orchestrator | Friday 23 May 2025 00:58:49 +0000 (0:00:00.885) 0:00:01.151 ************ 2025-05-23 01:03:39.093512 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-23 01:03:39.093523 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-23 01:03:39.093576 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-23 01:03:39.093588 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-23 01:03:39.093599 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-23 01:03:39.093609 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-23 01:03:39.093619 | orchestrator | 2025-05-23 01:03:39.093630 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-23 01:03:39.093641 | orchestrator | 2025-05-23 01:03:39.093652 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-23 01:03:39.093677 | orchestrator | Friday 23 May 2025 00:58:50 +0000 (0:00:00.711) 0:00:01.862 ************ 2025-05-23 01:03:39.093688 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:03:39.093700 | orchestrator | 2025-05-23 01:03:39.093711 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-23 01:03:39.093721 | orchestrator | Friday 23 May 2025 00:58:51 +0000 (0:00:01.143) 0:00:03.006 ************ 2025-05-23 01:03:39.093732 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.093743 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.093753 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.093763 | 
orchestrator | ok: [testbed-node-3] 2025-05-23 01:03:39.093774 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:03:39.093784 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:03:39.093795 | orchestrator | 2025-05-23 01:03:39.093805 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-23 01:03:39.093817 | orchestrator | Friday 23 May 2025 00:58:52 +0000 (0:00:01.166) 0:00:04.172 ************ 2025-05-23 01:03:39.093828 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.093864 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.093885 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.093896 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:03:39.093907 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:03:39.093917 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:03:39.094155 | orchestrator | 2025-05-23 01:03:39.094180 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-23 01:03:39.094191 | orchestrator | Friday 23 May 2025 00:58:53 +0000 (0:00:01.042) 0:00:05.215 ************ 2025-05-23 01:03:39.094202 | orchestrator | ok: [testbed-node-0] => { 2025-05-23 01:03:39.094214 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094224 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094235 | orchestrator | } 2025-05-23 01:03:39.094246 | orchestrator | ok: [testbed-node-1] => { 2025-05-23 01:03:39.094257 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094267 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094278 | orchestrator | } 2025-05-23 01:03:39.094289 | orchestrator | ok: [testbed-node-2] => { 2025-05-23 01:03:39.094300 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094310 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094321 | orchestrator | } 2025-05-23 01:03:39.094331 | orchestrator | ok: [testbed-node-3] => { 2025-05-23 01:03:39.094342 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094353 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094363 | orchestrator | } 2025-05-23 01:03:39.094374 | orchestrator | ok: [testbed-node-4] => { 2025-05-23 01:03:39.094385 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094396 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094406 | orchestrator | } 2025-05-23 01:03:39.094417 | orchestrator | ok: [testbed-node-5] => { 2025-05-23 01:03:39.094427 | orchestrator |  "changed": false, 2025-05-23 01:03:39.094438 | orchestrator |  "msg": "All assertions passed" 2025-05-23 01:03:39.094448 | orchestrator | } 2025-05-23 01:03:39.094459 | orchestrator | 2025-05-23 01:03:39.094469 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-23 01:03:39.094480 | orchestrator | Friday 23 May 2025 00:58:54 +0000 (0:00:00.630) 0:00:05.845 ************ 2025-05-23 01:03:39.094491 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.094501 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.094512 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.094522 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.094532 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.094543 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.094553 | orchestrator | 2025-05-23 01:03:39.094564 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-23 
01:03:39.094642 | orchestrator | Friday 23 May 2025 00:58:55 +0000 (0:00:00.672) 0:00:06.517 ************ 2025-05-23 01:03:39.094656 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-23 01:03:39.094733 | orchestrator | 2025-05-23 01:03:39.094745 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-23 01:03:39.094756 | orchestrator | Friday 23 May 2025 00:58:58 +0000 (0:00:03.390) 0:00:09.908 ************ 2025-05-23 01:03:39.094767 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-23 01:03:39.094778 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-23 01:03:39.094789 | orchestrator | 2025-05-23 01:03:39.094812 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-23 01:03:39.094914 | orchestrator | Friday 23 May 2025 00:59:04 +0000 (0:00:06.145) 0:00:16.054 ************ 2025-05-23 01:03:39.094926 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:03:39.094937 | orchestrator | 2025-05-23 01:03:39.094982 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-23 01:03:39.094994 | orchestrator | Friday 23 May 2025 00:59:08 +0000 (0:00:03.383) 0:00:19.437 ************ 2025-05-23 01:03:39.095005 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:03:39.095042 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-23 01:03:39.095062 | orchestrator | 2025-05-23 01:03:39.095080 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-23 01:03:39.095096 | orchestrator | Friday 23 May 2025 00:59:11 +0000 (0:00:03.765) 0:00:23.203 ************ 2025-05-23 01:03:39.095107 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:03:39.095118 | orchestrator | 2025-05-23 01:03:39.095129 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-23 01:03:39.095139 | orchestrator | Friday 23 May 2025 00:59:15 +0000 (0:00:03.187) 0:00:26.392 ************ 2025-05-23 01:03:39.095150 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-23 01:03:39.095160 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-23 01:03:39.095171 | orchestrator | 2025-05-23 01:03:39.095181 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-23 01:03:39.095192 | orchestrator | Friday 23 May 2025 00:59:23 +0000 (0:00:08.475) 0:00:34.867 ************ 2025-05-23 01:03:39.095203 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.095213 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.095224 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.095252 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.095264 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.095274 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.095295 | orchestrator | 2025-05-23 01:03:39.095321 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-23 01:03:39.095332 | orchestrator | Friday 23 May 2025 00:59:24 +0000 (0:00:01.105) 0:00:35.973 ************ 2025-05-23 01:03:39.095353 | orchestrator | skipping: 
[testbed-node-0] 2025-05-23 01:03:39.095364 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.095398 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.095409 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.095420 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.095430 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.095441 | orchestrator | 2025-05-23 01:03:39.095451 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-23 01:03:39.095462 | orchestrator | Friday 23 May 2025 00:59:29 +0000 (0:00:05.150) 0:00:41.123 ************ 2025-05-23 01:03:39.095473 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:39.095505 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:03:39.095517 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:03:39.095527 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:03:39.095538 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:03:39.095557 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:03:39.095568 | orchestrator | 2025-05-23 01:03:39.095579 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-23 01:03:39.095589 | orchestrator | Friday 23 May 2025 00:59:30 +0000 (0:00:00.952) 0:00:42.075 ************ 2025-05-23 01:03:39.095600 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.095610 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.095621 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.095631 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.095642 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.095652 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.095662 | orchestrator | 2025-05-23 01:03:39.095673 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-23 01:03:39.095684 | orchestrator | Friday 23 May 2025 00:59:34 +0000 (0:00:03.735) 0:00:45.811 ************ 2025-05-23 01:03:39.095698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.095726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.095853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.095884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.095898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.095950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.095990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.096116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.096239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.096261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.096362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.096410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.096441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.096474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.096497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096583 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.096652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.096682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.096731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.096753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 
5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.096906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.096942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.096963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.096977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.096988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.096998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.097059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.097094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.097128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.097163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.097789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.097810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.097821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.097858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.097878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.097906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.097920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.097931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.098008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.098170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.098184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.098195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.098211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.098319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.098352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.098364 | orchestrator | 2025-05-23 01:03:39.098374 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-23 01:03:39.098386 | orchestrator | Friday 23 May 2025 00:59:37 +0000 (0:00:03.315) 0:00:49.126 ************ 2025-05-23 01:03:39.098398 | orchestrator | [WARNING]: Skipped 2025-05-23 01:03:39.098409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-23 01:03:39.098421 | orchestrator | due to this access issue: 2025-05-23 01:03:39.098439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-23 01:03:39.098450 | orchestrator | a directory 2025-05-23 01:03:39.098462 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:03:39.098472 | orchestrator | 2025-05-23 01:03:39.098483 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-23 01:03:39.098495 | orchestrator | Friday 23 May 2025 00:59:38 +0000 (0:00:00.784) 0:00:49.911 ************ 2025-05-23 01:03:39.098506 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:03:39.098517 | orchestrator | 2025-05-23 01:03:39.098529 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-23 01:03:39.098540 | orchestrator | Friday 23 May 2025 00:59:39 +0000 (0:00:01.296) 0:00:51.207 ************ 2025-05-23 01:03:39.098552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.098581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.098594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.098613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.098632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.098643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.098655 | orchestrator | 2025-05-23 01:03:39.098666 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-23 01:03:39.098677 | orchestrator | Friday 23 May 2025 00:59:44 +0000 (0:00:04.639) 0:00:55.846 ************ 2025-05-23 01:03:39.098712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.098731 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.098769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.098780 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.098811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.098822 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.098832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.098842 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.098852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.098873 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.098883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.098892 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.098902 | orchestrator | 2025-05-23 01:03:39.098911 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-23 01:03:39.098921 | orchestrator | Friday 23 May 2025 00:59:47 +0000 (0:00:03.229) 0:00:59.076 ************ 2025-05-23 01:03:39.098931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.098941 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.098956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.098967 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.098977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.098993 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.099007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.099058 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.099078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.099094 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.099109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.099119 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.099129 | orchestrator | 2025-05-23 01:03:39.099144 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-23 01:03:39.099154 | orchestrator | Friday 23 May 2025 00:59:51 +0000 (0:00:03.692) 0:01:02.768 ************ 2025-05-23 01:03:39.099164 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.099174 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.099183 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.099193 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.099202 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.099211 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.099221 | orchestrator | 2025-05-23 01:03:39.099230 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-23 01:03:39.099247 
| orchestrator | Friday 23 May 2025 00:59:55 +0000 (0:00:04.368) 0:01:07.137 ************ 2025-05-23 01:03:39.099263 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.099277 | orchestrator | 2025-05-23 01:03:39.099309 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-23 01:03:39.099412 | orchestrator | Friday 23 May 2025 00:59:55 +0000 (0:00:00.125) 0:01:07.263 ************ 2025-05-23 01:03:39.099428 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.099438 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.099457 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.099466 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.099476 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.099485 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.099494 | orchestrator | 2025-05-23 01:03:39.099504 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-23 01:03:39.099513 | orchestrator | Friday 23 May 2025 00:59:56 +0000 (0:00:00.944) 0:01:08.207 ************ 2025-05-23 01:03:39.099530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.099541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.099597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.099659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.099686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.099724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.099740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099756 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.099766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.099780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 
01:03:39.099833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.099909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.099935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.099949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.099970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.099996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.100068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.100084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100120 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.100148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.100167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.100187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.100252 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.100274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.100309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.100318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': 
False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.100339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.100348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.100361 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.101224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.101309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-23 01:03:39.102632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.102725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.102737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.102765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.102866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.102882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.102912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.102923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.102941 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.103790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.103820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.103831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.103842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.103863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.103951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.103965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.103975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.104114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.104498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.104516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.104534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.104613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104625 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.104716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.104732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.104771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.104783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.104795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104807 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.104885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.104901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.104930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.104949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.104961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.105134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.105161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105173 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.105184 | orchestrator | 2025-05-23 01:03:39.105196 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-23 01:03:39.105209 | orchestrator | Friday 23 May 2025 01:00:02 +0000 (0:00:05.966) 0:01:14.174 ************ 2025-05-23 01:03:39.105220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.105304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.105740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.105762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.105869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.105915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.105968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.105980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.105997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.106105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.106127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': 
'30'}}})  2025-05-23 01:03:39.106157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.106169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.106228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.106245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.106273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.106337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.106393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.106445 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.106490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.106539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.106569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.106603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.106658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.106727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.106792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.106862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.106902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.106937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.106948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.106975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.106988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.107005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.107048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.107061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.107102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.107130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.107142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.107153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.107199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.107227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.107239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.107270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-23 01:03:39.107288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.107299 | orchestrator |
2025-05-23 01:03:39.107311 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-05-23 01:03:39.107322 | orchestrator | Friday 23 May 2025 01:00:07 +0000 (0:00:05.162) 0:01:19.336 ************
2025-05-23 01:03:39.107737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-23 01:03:39.107762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.107781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.107898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.107927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.107938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.107955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.107974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.108124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.108155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.108167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.108285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.108586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.108616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.108633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.108665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.108958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.109633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.109689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.109746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.109766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.110412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.110530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.110640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.110666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.110678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.110715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.110747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.110759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.110797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.110809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.110839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.110870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.110886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.110910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.110927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.110958 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.110974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.110985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.110997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.111075 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.111094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.111402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.111472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.111563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.111591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.111603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.111681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.111709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.111740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.111862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.111885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.111902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.111931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.111950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.111962 | orchestrator | 2025-05-23 01:03:39.111975 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-23 01:03:39.111987 | orchestrator | Friday 23 May 2025 01:00:15 +0000 (0:00:07.799) 0:01:27.136 ************ 2025-05-23 01:03:39.111999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.112037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.112050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.112163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.112176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.112204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.112341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.112520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.112531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.112601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.112676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.112695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.112713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.112726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.112771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112857 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.112872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112906 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.112918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.112941 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.112958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.112970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.112992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.113108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.113155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.113196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.113253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.113358 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.113370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.113458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.113491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.113600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.113648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113675 | orchestrator | 2025-05-23 01:03:39.113686 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-23 01:03:39.113697 | orchestrator | Friday 23 May 2025 01:00:18 +0000 (0:00:03.155) 0:01:30.292 ************ 2025-05-23 01:03:39.113709 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.113720 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.113731 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.113741 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:39.113752 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:03:39.113762 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:03:39.113773 | orchestrator | 2025-05-23 01:03:39.113784 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-23 01:03:39.113795 | orchestrator | Friday 23 May 2025 01:00:23 +0000 (0:00:04.640) 0:01:34.932 ************ 2025-05-23 01:03:39.113807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.113825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.113886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.113959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.113970 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.113981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.113993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.114193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.114211 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114223 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.114235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.114246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.114332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.114406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.114434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2025-05-23 01:03:39.114471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.114484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.114496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114508 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.114524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.114536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.114600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.114671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114682 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.114702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.114713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.114749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.114761 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114772 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.114789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.114801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.114818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.114876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.114969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.114986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.114998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.115099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-23 01:03:39.115175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
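Note that the 'enabled' field in these definitions is not always a boolean: most services use True/False, but neutron-tls-proxy is logged with 'enabled': 'no'. A small, purely illustrative helper (hypothetical, not part of kolla-ansible) that lists which services in such a map are enabled therefore has to normalise the string form rather than rely on Python truthiness, since the non-empty string 'no' would otherwise count as true:

    # Illustrative sketch, assuming a service map shaped like the loop items above
    # ({name: {'enabled': ..., 'host_in_groups': ..., ...}}); not kolla-ansible code.
    def enabled_services(service_map: dict) -> list[str]:
        def truthy(value) -> bool:
            if isinstance(value, str):
                # 'no' / 'false' must not count as enabled even though the string is non-empty
                return value.strip().lower() in ("yes", "true", "1")
            return bool(value)
        return [name for name, svc in service_map.items() if truthy(svc.get("enabled"))]

    # 'enabled' values as they appear in the log items above.
    services = {
        "neutron-tls-proxy": {"enabled": "no"},
        "neutron-ovn-metadata-agent": {"enabled": True},
        "neutron-bgp-dragent": {"enabled": False},
    }
    print(enabled_services(services))  # ['neutron-ovn-metadata-agent']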
2025-05-23 01:03:39.115274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.115285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.115320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.115361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.115372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.115388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.115451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.115497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.115509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.115526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.115538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.115550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.115593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.115603 | orchestrator | 2025-05-23 01:03:39.115613 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-23 01:03:39.115623 | orchestrator | Friday 23 May 2025 01:00:27 +0000 (0:00:04.234) 0:01:39.167 ************ 2025-05-23 01:03:39.115633 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.115647 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.115658 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.115668 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.115677 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.115687 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.115696 | orchestrator | 2025-05-23 01:03:39.115706 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-23 01:03:39.115716 | orchestrator | Friday 23 May 2025 01:00:30 +0000 (0:00:02.912) 0:01:42.079 ************ 2025-05-23 01:03:39.115725 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.115735 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.115745 | orchestrator | skipping: 
[testbed-node-2] 2025-05-23 01:03:39.115754 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.115763 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.115773 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.115782 | orchestrator | 2025-05-23 01:03:39.115792 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-23 01:03:39.115807 | orchestrator | Friday 23 May 2025 01:00:33 +0000 (0:00:02.476) 0:01:44.555 ************ 2025-05-23 01:03:39.115817 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.115826 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.115836 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.115845 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.115854 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.115864 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.115873 | orchestrator | 2025-05-23 01:03:39.115883 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-23 01:03:39.115892 | orchestrator | Friday 23 May 2025 01:00:35 +0000 (0:00:02.335) 0:01:46.891 ************ 2025-05-23 01:03:39.115902 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.115911 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.115921 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.115930 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.115940 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.115949 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.115959 | orchestrator | 2025-05-23 01:03:39.115968 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-23 01:03:39.115978 | orchestrator | Friday 23 May 2025 01:00:37 +0000 (0:00:02.177) 0:01:49.069 ************ 2025-05-23 01:03:39.115987 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.115997 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.116006 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.116030 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.116040 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.116050 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.116060 | orchestrator | 2025-05-23 01:03:39.116069 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-23 01:03:39.116079 | orchestrator | Friday 23 May 2025 01:00:40 +0000 (0:00:02.831) 0:01:51.900 ************ 2025-05-23 01:03:39.116088 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.116102 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.116112 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.116121 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.116131 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.116140 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.116150 | orchestrator | 2025-05-23 01:03:39.116159 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-23 01:03:39.116169 | orchestrator | Friday 23 May 2025 01:00:42 +0000 (0:00:02.392) 0:01:54.293 ************ 2025-05-23 01:03:39.116178 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116188 | orchestrator | skipping: 
[testbed-node-0] 2025-05-23 01:03:39.116198 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116208 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.116217 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116227 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.116237 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116246 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.116256 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116266 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.116275 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-23 01:03:39.116285 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.116294 | orchestrator | 2025-05-23 01:03:39.116304 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-23 01:03:39.116313 | orchestrator | Friday 23 May 2025 01:00:44 +0000 (0:00:02.038) 0:01:56.331 ************ 2025-05-23 01:03:39.116335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.116347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.116397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.116681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.116692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.116719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.116748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.116826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.116863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.116874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116891 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.116901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.116969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.116990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.117113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.117306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.117334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.117440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.117456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117477 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.117487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.117497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.117623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.117786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.117819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.117829 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.117922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.117932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.117954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.117997 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.118009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.118252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.118357 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.118384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.118508 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.118517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118534 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.118547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.118556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.118655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.118801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.118829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.118838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.118905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.118921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118935 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.118944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.118956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.118965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.119097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 
01:03:39.119198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.119229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-05-23 01:03:39.119289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.119351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119374 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.119382 | orchestrator | 2025-05-23 01:03:39.119390 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-23 01:03:39.119399 | orchestrator | Friday 23 May 2025 01:00:47 +0000 (0:00:02.477) 0:01:58.809 ************ 2025-05-23 01:03:39.119408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.119421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
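[Note] The long "skipping" entries above and below come from the kolla-ansible neutron role looping over its service map: each loop item pairs a service name with a definition dict like the ones printed in this log (container_name, image, enabled, group, host_in_groups, volumes, dimensions, healthcheck, haproxy), and a config task only acts on a host when its per-item condition holds, so every other item is reported as skipped. The sketch below is an illustration only, not the role's actual Jinja conditions or variables; the data is reduced from the items shown above, and the real fwaas_driver.ini task also applies its own extra condition, which is why even enabled services are skipped here.

    # Minimal sketch (assumption, not kolla-ansible's real logic) of how a
    # per-item condition can skip most entries of a service map like the one
    # dumped in this log. Only the fields used below are kept.
    neutron_services = {
        "neutron-server": {"enabled": True, "host_in_groups": True},
        "neutron-dhcp-agent": {"enabled": False, "host_in_groups": True},
        "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
    }

    def truthy(value):
        # kolla-style flags mix booleans and "yes"/"no" strings.
        return value is True or str(value).lower() in ("yes", "true")

    for name, service in neutron_services.items():
        if truthy(service["enabled"]) and service["host_in_groups"]:
            print(f"would template config for {name}")
        else:
            print(f"skipping {name}")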
2025-05-23 01:03:39.119429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.119517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 
01:03:39.119541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.119652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.119684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.119715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.119771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119793 
| orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.119801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.119810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.119951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.119981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.119990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.119998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.120098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.120107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.120129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.120138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120180 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.120245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.120262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.120357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.120373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.120498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.120518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.120531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.120557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.120628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.120665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.120688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.120807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.120826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.120885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.120980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.121079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.121236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.121250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.121353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121360 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.121368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121425 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.121435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.121447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121469 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.121476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.121503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.121540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.121619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.121626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.121660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.121674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.121681 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.121688 | orchestrator | 2025-05-23 01:03:39.121699 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-23 01:03:39.121706 | orchestrator | Friday 23 May 2025 01:00:50 +0000 (0:00:02.865) 0:02:01.674 ************ 2025-05-23 01:03:39.121712 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.121719 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.121726 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.121733 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.121739 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:03:39.121746 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.121752 | orchestrator | 2025-05-23 01:03:39.121759 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-23 01:03:39.121766 | orchestrator | Friday 23 May 2025 01:00:53 +0000 (0:00:02.708) 0:02:04.383 ************ 2025-05-23 01:03:39.121772 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.121779 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.121785 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.121792 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:03:39.121799 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:03:39.121805 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:03:39.121812 | orchestrator | 2025-05-23 01:03:39.121818 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-23 01:03:39.121825 | orchestrator | Friday 23 May 2025 
01:00:58 +0000 (0:00:05.788) 0:02:10.171 ************
2025-05-23 01:03:39.121832 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.121839 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.121845 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.121852 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.121858 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.121865 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.121871 | orchestrator |
2025-05-23 01:03:39.121878 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-23 01:03:39.121885 | orchestrator | Friday 23 May 2025 01:01:01 +0000 (0:00:02.178) 0:02:12.349 ************
2025-05-23 01:03:39.121891 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.121898 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.121905 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.121954 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.121961 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.121967 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.121974 | orchestrator |
2025-05-23 01:03:39.121980 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-23 01:03:39.121987 | orchestrator | Friday 23 May 2025 01:01:03 +0000 (0:00:02.237) 0:02:14.587 ************
2025-05-23 01:03:39.121993 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122000 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122006 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122053 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122071 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122078 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122086 | orchestrator |
2025-05-23 01:03:39.122094 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-23 01:03:39.122102 | orchestrator | Friday 23 May 2025 01:01:05 +0000 (0:00:02.564) 0:02:17.151 ************
2025-05-23 01:03:39.122133 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122142 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122149 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122157 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122164 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122172 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122180 | orchestrator |
2025-05-23 01:03:39.122188 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-23 01:03:39.122196 | orchestrator | Friday 23 May 2025 01:01:08 +0000 (0:00:02.521) 0:02:19.672 ************
2025-05-23 01:03:39.122204 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122212 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122220 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122227 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122235 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122244 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122252 | orchestrator |
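The loop output above iterates over the complete neutron service map on every host, so most items report "skipping": whether a template is rendered for a given host depends mainly on two flags visible in each entry, `enabled` and `host_in_groups`, with individual templates adding conditions of their own (which is why even enabled services can be skipped for a specific file). A minimal sketch of that filter, assuming a plain-Python re-implementation rather than the actual Ansible/Jinja condition used by the kolla-ansible role:

```python
# Sketch only (assumed logic, not the kolla-ansible implementation):
# keep a service entry when it is enabled and the host is in its group.

def enabled_and_mapped(service: dict) -> bool:
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):  # the map above mixes True/False and 'no'
        enabled = enabled.strip().lower() not in ("no", "false", "0", "")
    return bool(enabled) and bool(service.get("host_in_groups", False))

# Entries abridged from the loop output above (values taken from different hosts).
services = {
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},
    "neutron-bgp-dragent": {"enabled": False, "host_in_groups": True},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": False},
}

for name, svc in services.items():
    print(name, "->", "handled" if enabled_and_mapped(svc) else "skipped")
```

Only the OVN metadata agent passes the filter, which matches the task results above: neutron_ovn_metadata_agent.ini comes back changed on testbed-node-3/4/5, while every disabled or unmapped service is skipped.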
2025-05-23 01:03:39.122259 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-23 01:03:39.122267 | orchestrator | Friday 23 May 2025 01:01:10 +0000 (0:00:02.485) 0:02:22.158 ************
2025-05-23 01:03:39.122275 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122282 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122290 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122298 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122305 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122313 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122321 | orchestrator |
2025-05-23 01:03:39.122329 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-23 01:03:39.122337 | orchestrator | Friday 23 May 2025 01:01:14 +0000 (0:00:03.193) 0:02:25.351 ************
2025-05-23 01:03:39.122345 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122353 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122360 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122368 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122376 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122384 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122392 | orchestrator |
2025-05-23 01:03:39.122400 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-23 01:03:39.122409 | orchestrator | Friday 23 May 2025 01:01:17 +0000 (0:00:03.357) 0:02:28.709 ************
2025-05-23 01:03:39.122416 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122423 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122430 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122436 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122443 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122449 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122456 | orchestrator |
2025-05-23 01:03:39.122462 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-23 01:03:39.122469 | orchestrator | Friday 23 May 2025 01:01:19 +0000 (0:00:02.007) 0:02:30.717 ************
2025-05-23 01:03:39.122480 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122487 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.122493 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122500 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.122511 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122518 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.122525 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122531 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.122538 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122545 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.122551 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-23 01:03:39.122558 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.122565 | orchestrator |
2025-05-23 01:03:39.122571 | orchestrator | TASK
[neutron : Copying over neutron_taas.conf] ******************************** 2025-05-23 01:03:39.122578 | orchestrator | Friday 23 May 2025 01:01:22 +0000 (0:00:03.312) 0:02:34.029 ************ 2025-05-23 01:03:39.122585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.122612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122643 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.122650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.122712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.122726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.122766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.122781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122788 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:39.122795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.122819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.122855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.122919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.122933 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.122940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.122974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.122988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.122995 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:39.123002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.123009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123095 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.123105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.123182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.123222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123241 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:39.123251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.123259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.123315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.123392 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.123399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.123457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123509 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:03:39.123516 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.123523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.123600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.123624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123662 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:03:39.123669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.123679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.123731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.123808 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.123832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.123862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.123870 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.123877 | orchestrator |
2025-05-23 01:03:39.123884 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-05-23 01:03:39.123891 | orchestrator | Friday 23 May 2025 01:01:25 +0000 (0:00:02.576) 0:02:36.606 ************
2025-05-23 01:03:39.123898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-23 01:03:39.123908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.123915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.123926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.123960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.123967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.123977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}}) 
 2025-05-23 01:03:39.123990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.123998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.124113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-23 01:03:39.124155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.124197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.124243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.124278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.124338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.124393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.124432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-23 01:03:39.124488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.124501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124560 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-23 01:03:39.124597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.124653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.124706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.124719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.124729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:03:39.124770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-23 01:03:39.124841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-23 01:03:39.124859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:03:39.124866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-23 01:03:39.124875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-23 01:03:39.124886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-23 01:03:39.124892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-23 01:03:39.124899 | orchestrator |
2025-05-23 01:03:39.124905 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-23 01:03:39.124911 | orchestrator | Friday 23 May 2025 01:01:27 +0000 (0:00:02.688) 0:02:39.294 ************
2025-05-23 01:03:39.124918 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:03:39.124924 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:03:39.124930 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:03:39.124937 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:03:39.124943 | orchestrator | skipping: [testbed-node-4]
2025-05-23 01:03:39.124949 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:03:39.124955 | orchestrator |
2025-05-23 01:03:39.124961 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-05-23 01:03:39.124967 | orchestrator | Friday 23 May 2025 01:01:28 +0000 (0:00:00.697) 0:02:39.992 ************
2025-05-23 01:03:39.124976 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:03:39.124982 | orchestrator |
2025-05-23 01:03:39.124988 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-05-23 01:03:39.124995 | orchestrator | Friday 23 May 2025 01:01:31 +0000 (0:00:02.409) 0:02:42.401 ************
2025-05-23 01:03:39.125001 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:03:39.125007 | orchestrator |
2025-05-23 01:03:39.125028 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-05-23 01:03:39.125036 | orchestrator | Friday 23 May 2025 01:01:33 +0000 (0:00:02.261) 0:02:44.662 ************
2025-05-23 01:03:39.125042 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:03:39.125048 | orchestrator |
2025-05-23 01:03:39.125054 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125060 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:39.803) 0:03:24.466 ************
2025-05-23 01:03:39.125067 | orchestrator |
2025-05-23 01:03:39.125073 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125079 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.055) 0:03:24.522 ************
2025-05-23 01:03:39.125085 | orchestrator |
2025-05-23 01:03:39.125091 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125097 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.307) 0:03:24.829 ************
2025-05-23 01:03:39.125103 | orchestrator |
2025-05-23 01:03:39.125109 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125116 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.055) 0:03:24.884 ************
2025-05-23 01:03:39.125126 | orchestrator |
2025-05-23 01:03:39.125132 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125138 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.054) 0:03:24.939 ************
2025-05-23 01:03:39.125144 | orchestrator |
2025-05-23 01:03:39.125150 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-23 01:03:39.125156 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.054) 0:03:24.993 ************
2025-05-23 01:03:39.125162 | orchestrator |
2025-05-23 01:03:39.125169 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-23 01:03:39.125175 | orchestrator | Friday 23 May 2025 01:02:13 +0000 (0:00:00.284) 0:03:25.277 ************
2025-05-23 01:03:39.125181 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:03:39.125187 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:03:39.125193 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:03:39.125199 | orchestrator |
2025-05-23 01:03:39.125205 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-23 01:03:39.125211 | orchestrator | Friday 23 May 2025 01:02:40 +0000 (0:00:26.537) 0:03:51.814 ************
2025-05-23 01:03:39.125217 | orchestrator | changed: [testbed-node-4]
2025-05-23 01:03:39.125223 | orchestrator | changed: [testbed-node-3]
2025-05-23 01:03:39.125230 | orchestrator | changed: [testbed-node-5]
2025-05-23 01:03:39.125236 | orchestrator |
2025-05-23 01:03:39.125242 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 01:03:39.125252 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-23 01:03:39.125259 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-23 01:03:39.125265 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-23 01:03:39.125272 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-23 01:03:39.125278 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-23 01:03:39.125284 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-23 01:03:39.125290 | orchestrator |
2025-05-23 01:03:39.125297 | orchestrator |
2025-05-23 01:03:39.125303 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 01:03:39.125309 | orchestrator | Friday 23 May 2025 01:03:36 +0000 (0:00:55.992) 0:04:47.806 ************
2025-05-23 01:03:39.125315 | orchestrator | ===============================================================================
2025-05-23 01:03:39.125321 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.99s
2025-05-23 01:03:39.125327 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.80s
2025-05-23 01:03:39.125333 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.54s
2025-05-23 01:03:39.125340 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.48s
2025-05-23 01:03:39.125346 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.80s
2025-05-23 01:03:39.125352 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.15s
2025-05-23 01:03:39.125358 | orchestrator | neutron : Copying over existing policy file ----------------------------- 5.97s
2025-05-23 01:03:39.125364 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.79s
2025-05-23 01:03:39.125370 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.16s
2025-05-23 01:03:39.125380 | orchestrator | Load and persist kernel modules ----------------------------------------- 5.15s
2025-05-23 01:03:39.125389 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.64s
2025-05-23 01:03:39.125396 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.64s
2025-05-23 01:03:39.125402 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.37s
2025-05-23 01:03:39.125408 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.23s
2025-05-23 01:03:39.125414 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.77s
2025-05-23 01:03:39.125420 | orchestrator | Setting sysctl values --------------------------------------------------- 3.74s
2025-05-23 01:03:39.125426 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.69s
2025-05-23 01:03:39.125432 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.39s
2025-05-23 01:03:39.125438 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.38s
2025-05-23 01:03:39.125444 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.36s
2025-05-23 01:03:39.125451 | orchestrator | 2025-05-23 01:03:39 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED
2025-05-23 01:03:39.125457 | orchestrator | 2025-05-23 01:03:39 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:03:42.139103 | orchestrator | 2025-05-23 01:03:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:03:42.141371 | orchestrator | 2025-05-23 01:03:42 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state STARTED
2025-05-23 01:03:42.144689 | orchestrator | 2025-05-23 01:03:42 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED
2025-05-23 01:03:42.144739 | orchestrator | 2025-05-23 01:03:42 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED
2025-05-23 01:03:42.144746 | orchestrator | 2025-05-23 01:03:42 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED
2025-05-23 01:03:42.144752 | orchestrator | 2025-05-23 01:03:42 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:03:45.198148 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED
2025-05-23 01:03:45.199674 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:03:45.201471 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task cdc384be-1e79-4943-a7d0-8ddfd67069aa is in state SUCCESS
2025-05-23 01:03:45.203438 | orchestrator |
2025-05-23 01:03:45.203470 | orchestrator |
2025-05-23 01:03:45.203482 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-23 01:03:45.203493 | orchestrator |
2025-05-23 01:03:45.203504 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-23 01:03:45.203515 | orchestrator | Friday 23 May 2025 01:01:45 +0000 (0:00:00.301) 0:00:00.301 ************
2025-05-23 01:03:45.203526 | orchestrator | ok: [testbed-node-0]
2025-05-23 01:03:45.203538 | orchestrator | ok: [testbed-node-1]
2025-05-23
01:03:45.203548 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:03:45.203559 | orchestrator | 2025-05-23 01:03:45.203570 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:03:45.203581 | orchestrator | Friday 23 May 2025 01:01:46 +0000 (0:00:00.423) 0:00:00.724 ************ 2025-05-23 01:03:45.203591 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-23 01:03:45.203603 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-23 01:03:45.203613 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-23 01:03:45.203624 | orchestrator | 2025-05-23 01:03:45.203635 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-23 01:03:45.203667 | orchestrator | 2025-05-23 01:03:45.203679 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-23 01:03:45.203690 | orchestrator | Friday 23 May 2025 01:01:46 +0000 (0:00:00.307) 0:00:01.032 ************ 2025-05-23 01:03:45.203700 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:03:45.203711 | orchestrator | 2025-05-23 01:03:45.203722 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-23 01:03:45.203732 | orchestrator | Friday 23 May 2025 01:01:47 +0000 (0:00:00.791) 0:00:01.823 ************ 2025-05-23 01:03:45.203743 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-23 01:03:45.203754 | orchestrator | 2025-05-23 01:03:45.203765 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-23 01:03:45.203776 | orchestrator | Friday 23 May 2025 01:01:50 +0000 (0:00:03.417) 0:00:05.240 ************ 2025-05-23 01:03:45.203787 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-23 01:03:45.203798 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-23 01:03:45.203808 | orchestrator | 2025-05-23 01:03:45.203819 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-23 01:03:45.203880 | orchestrator | Friday 23 May 2025 01:01:57 +0000 (0:00:06.424) 0:00:11.664 ************ 2025-05-23 01:03:45.203892 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:03:45.203903 | orchestrator | 2025-05-23 01:03:45.203914 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-23 01:03:45.203924 | orchestrator | Friday 23 May 2025 01:02:00 +0000 (0:00:03.348) 0:00:15.013 ************ 2025-05-23 01:03:45.203935 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:03:45.203946 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-23 01:03:45.203957 | orchestrator | 2025-05-23 01:03:45.203968 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-23 01:03:45.203979 | orchestrator | Friday 23 May 2025 01:02:04 +0000 (0:00:03.862) 0:00:18.876 ************ 2025-05-23 01:03:45.203989 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:03:45.204000 | orchestrator | 2025-05-23 01:03:45.204040 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] 
********************** 2025-05-23 01:03:45.204051 | orchestrator | Friday 23 May 2025 01:02:07 +0000 (0:00:03.107) 0:00:21.983 ************ 2025-05-23 01:03:45.204062 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-23 01:03:45.204073 | orchestrator | 2025-05-23 01:03:45.204084 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-23 01:03:45.204094 | orchestrator | Friday 23 May 2025 01:02:11 +0000 (0:00:04.145) 0:00:26.128 ************ 2025-05-23 01:03:45.204106 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.204116 | orchestrator | 2025-05-23 01:03:45.204127 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-23 01:03:45.204138 | orchestrator | Friday 23 May 2025 01:02:14 +0000 (0:00:03.208) 0:00:29.337 ************ 2025-05-23 01:03:45.204149 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.204159 | orchestrator | 2025-05-23 01:03:45.204170 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-23 01:03:45.204181 | orchestrator | Friday 23 May 2025 01:02:18 +0000 (0:00:04.157) 0:00:33.494 ************ 2025-05-23 01:03:45.204192 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.204203 | orchestrator | 2025-05-23 01:03:45.204214 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-23 01:03:45.204225 | orchestrator | Friday 23 May 2025 01:02:22 +0000 (0:00:03.487) 0:00:36.982 ************ 2025-05-23 01:03:45.204288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.204314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.204327 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.204343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204392 | orchestrator | 2025-05-23 01:03:45.204403 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-23 01:03:45.204414 | orchestrator | Friday 23 May 2025 01:02:23 +0000 (0:00:01.476) 0:00:38.458 ************ 2025-05-23 01:03:45.204424 | 
orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.204435 | orchestrator | 2025-05-23 01:03:45.204446 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-23 01:03:45.204457 | orchestrator | Friday 23 May 2025 01:02:24 +0000 (0:00:00.117) 0:00:38.575 ************ 2025-05-23 01:03:45.204468 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.204478 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.204489 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.204500 | orchestrator | 2025-05-23 01:03:45.204512 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-23 01:03:45.204525 | orchestrator | Friday 23 May 2025 01:02:24 +0000 (0:00:00.384) 0:00:38.960 ************ 2025-05-23 01:03:45.204537 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:03:45.204550 | orchestrator | 2025-05-23 01:03:45.204562 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-23 01:03:45.204575 | orchestrator | Friday 23 May 2025 01:02:24 +0000 (0:00:00.540) 0:00:39.501 ************ 2025-05-23 01:03:45.204589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.204609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.204623 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.204636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.204661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.204673 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.204684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.204695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.204707 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.204718 | orchestrator | 2025-05-23 01:03:45.204732 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-23 01:03:45.204743 | orchestrator | Friday 23 May 2025 01:02:25 +0000 (0:00:01.027) 0:00:40.528 ************ 2025-05-23 01:03:45.204754 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.204765 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.204776 | orchestrator | skipping: [testbed-node-2] 2025-05-23 
01:03:45.204786 | orchestrator | 2025-05-23 01:03:45.204797 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-23 01:03:45.204808 | orchestrator | Friday 23 May 2025 01:02:26 +0000 (0:00:00.429) 0:00:40.958 ************ 2025-05-23 01:03:45.204824 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:03:45.204835 | orchestrator | 2025-05-23 01:03:45.204846 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-23 01:03:45.204856 | orchestrator | Friday 23 May 2025 01:02:28 +0000 (0:00:01.746) 0:00:42.705 ************ 2025-05-23 01:03:45.204868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.204885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.204897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
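
The healthcheck blocks attached to each service above are plain shell checks executed inside the container (for example 'healthcheck_curl http://192.168.16.10:9511' for magnum-api and 'healthcheck_port magnum-conductor 5672' for the conductor), retried up to 'retries' times at 'interval'-second spacing. As a rough illustration of what a port-style check reduces to, and assuming nothing about the real healthcheck_port script shipped in the kolla images (which may also verify which process owns the socket), a bare TCP probe looks like this:

    # port_probe.py - illustrative sketch only, not the kolla healthcheck_port script
    import socket
    import sys

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
        port = int(sys.argv[2]) if len(sys.argv) > 2 else 9511   # magnum-api port from the log
        sys.exit(0 if port_open(host, port) else 1)              # exit 0 = healthy, 1 = unhealthy
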
2025-05-23 01:03:45.204913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.204954 | orchestrator | 2025-05-23 01:03:45.204965 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-23 01:03:45.204976 | orchestrator | Friday 23 May 2025 01:02:30 +0000 (0:00:02.783) 0:00:45.488 ************ 2025-05-23 01:03:45.204994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2025-05-23 01:03:45.205031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205043 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.205059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205089 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.205101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205130 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.205141 | orchestrator | 2025-05-23 01:03:45.205152 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-23 01:03:45.205163 | orchestrator | Friday 23 May 2025 01:02:31 +0000 (0:00:00.870) 0:00:46.359 ************ 2025-05-23 01:03:45.205174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205232 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.205252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205304 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.205315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205333 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.205344 | orchestrator | 2025-05-23 01:03:45.205355 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-23 01:03:45.205366 | orchestrator | Friday 23 May 2025 01:02:33 +0000 (0:00:02.031) 0:00:48.391 ************ 2025-05-23 01:03:45.205382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205469 | orchestrator | 2025-05-23 01:03:45.205480 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-23 01:03:45.205491 | orchestrator | Friday 23 May 2025 01:02:36 +0000 (0:00:03.125) 0:00:51.517 ************ 2025-05-23 01:03:45.205502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205587 | orchestrator | 2025-05-23 01:03:45.205598 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-23 01:03:45.205614 | 
orchestrator | Friday 23 May 2025 01:02:48 +0000 (0:00:11.616) 0:01:03.133 ************ 2025-05-23 01:03:45.205626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205654 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.205670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205692 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.205710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-23 01:03:45.205722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:03:45.205739 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.205750 | orchestrator | 2025-05-23 01:03:45.205761 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-23 01:03:45.205771 | orchestrator | Friday 23 May 2025 01:02:49 +0000 (0:00:01.356) 0:01:04.490 ************ 2025-05-23 01:03:45.205787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-23 01:03:45.205827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:03:45.205871 | orchestrator | 2025-05-23 01:03:45.205882 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-23 01:03:45.205893 | orchestrator | Friday 23 May 2025 01:02:53 +0000 (0:00:03.640) 0:01:08.130 ************ 2025-05-23 01:03:45.205904 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:03:45.205915 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:03:45.205926 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:03:45.205936 | orchestrator | 2025-05-23 01:03:45.205947 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-23 01:03:45.205958 | orchestrator | Friday 23 May 2025 01:02:54 +0000 (0:00:00.438) 0:01:08.568 ************ 2025-05-23 01:03:45.205968 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.205979 | orchestrator | 2025-05-23 01:03:45.205990 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-23 01:03:45.206001 | orchestrator | Friday 23 May 2025 01:02:56 +0000 (0:00:02.959) 0:01:11.528 ************ 2025-05-23 01:03:45.206099 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.206116 | orchestrator | 2025-05-23 01:03:45.206127 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-23 01:03:45.206138 | orchestrator | Friday 23 May 2025 01:02:59 +0000 (0:00:02.290) 0:01:13.818 ************ 2025-05-23 01:03:45.206149 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.206160 | orchestrator | 2025-05-23 01:03:45.206170 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-23 01:03:45.206181 | orchestrator | Friday 23 May 2025 01:03:16 +0000 (0:00:17.670) 0:01:31.489 ************ 2025-05-23 01:03:45.206192 | orchestrator | 2025-05-23 01:03:45.206203 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-23 01:03:45.206214 | orchestrator | Friday 23 May 2025 01:03:16 +0000 (0:00:00.054) 0:01:31.543 ************ 2025-05-23 01:03:45.206225 | orchestrator | 2025-05-23 01:03:45.206235 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-23 01:03:45.206246 | orchestrator | Friday 23 May 2025 01:03:17 +0000 (0:00:00.131) 0:01:31.675 ************ 2025-05-23 01:03:45.206257 | orchestrator | 2025-05-23 01:03:45.206268 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-23 01:03:45.206286 | orchestrator | Friday 23 May 2025 01:03:17 +0000 (0:00:00.049) 0:01:31.724 ************ 2025-05-23 01:03:45.206297 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:03:45.206307 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:03:45.206318 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:03:45.206329 | orchestrator | 2025-05-23 01:03:45.206340 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-23 01:03:45.206351 | orchestrator | Friday 23 May 2025 01:03:30 +0000 (0:00:13.650) 0:01:45.375 ************ 2025-05-23 01:03:45.206362 | orchestrator 
| changed: [testbed-node-0] 2025-05-23 01:03:45.206373 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:03:45.206384 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:03:45.206395 | orchestrator | 2025-05-23 01:03:45.206406 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:03:45.206424 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-23 01:03:45.206437 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:03:45.206448 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:03:45.206459 | orchestrator | 2025-05-23 01:03:45.206470 | orchestrator | 2025-05-23 01:03:45.206480 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:03:45.206491 | orchestrator | Friday 23 May 2025 01:03:43 +0000 (0:00:12.699) 0:01:58.074 ************ 2025-05-23 01:03:45.206502 | orchestrator | =============================================================================== 2025-05-23 01:03:45.206512 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.67s 2025-05-23 01:03:45.206523 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.65s 2025-05-23 01:03:45.206534 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.70s 2025-05-23 01:03:45.206544 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.62s 2025-05-23 01:03:45.206555 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.42s 2025-05-23 01:03:45.206566 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.16s 2025-05-23 01:03:45.206577 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.15s 2025-05-23 01:03:45.206587 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.86s 2025-05-23 01:03:45.206598 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.64s 2025-05-23 01:03:45.206609 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.49s 2025-05-23 01:03:45.206620 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.42s 2025-05-23 01:03:45.206630 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.35s 2025-05-23 01:03:45.206641 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.21s 2025-05-23 01:03:45.206652 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.13s 2025-05-23 01:03:45.206662 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.11s 2025-05-23 01:03:45.206673 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.96s 2025-05-23 01:03:45.206689 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.78s 2025-05-23 01:03:45.206700 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2025-05-23 01:03:45.206711 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.03s 
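Two patterns dominate this part of the log and can be summarised with short, illustrative Python sketches (neither is kolla-ansible or OSISM source code). First, each loop item printed above is a kolla-ansible container definition whose healthcheck and haproxy fields drive the container's health test and the internal/external VIP listeners; the sketch below mirrors those fields with values copied from the log, and the helper function is hypothetical.

```python
# Illustrative sketch only (not kolla-ansible source): the shape of the
# "magnum-api" loop item shown above, plus a hypothetical helper that shows
# how its haproxy entries map to internal vs. external listeners.
MAGNUM_API = {
    "container_name": "magnum_api",
    "image": "registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "timeout": "30",
    },
    "haproxy": {
        "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                       "port": "9511", "listen_port": "9511"},
        "magnum_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "9511", "listen_port": "9511"},
    },
}

def haproxy_listeners(item: dict) -> list[tuple[str, bool, str]]:
    """Hypothetical helper: (listener name, external?, listen port) per enabled entry."""
    return [(name, bool(cfg.get("external")), cfg["listen_port"])
            for name, cfg in item.get("haproxy", {}).items()
            if cfg.get("enabled") == "yes"]
```

Second, the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow are produced by a polling loop in the OSISM manager while the individual kolla-ansible plays run. A minimal sketch of such a loop, assuming a hypothetical get_task_state() callable for querying task state, looks like this:

```python
# Minimal sketch (not OSISM code) of the wait loop suggested by the
# "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check"
# lines below. get_task_state() is a hypothetical stand-in for however the
# manager actually queries task state.
import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll until every task leaves the STARTED state; return the final states."""
    pending = set(task_ids)
    final: dict[str, str] = {}
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":          # e.g. SUCCESS or FAILURE
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

Read against the log below, the transitions of cbbd0018-4e6f-4571-8175-0b4ae02c1e46 and later 1463f3b3-9dff-41e2-88ab-8b4642c1948b to SUCCESS, while the remaining task IDs keep reporting STARTED, correspond to individual entries leaving the pending set in such a loop.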
2025-05-23 01:03:45.206722 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.75s 2025-05-23 01:03:45.206732 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:45.206753 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:45.206764 | orchestrator | 2025-05-23 01:03:45 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:45.206775 | orchestrator | 2025-05-23 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:48.256520 | orchestrator | 2025-05-23 01:03:48 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:03:48.258173 | orchestrator | 2025-05-23 01:03:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:48.260233 | orchestrator | 2025-05-23 01:03:48 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:48.262654 | orchestrator | 2025-05-23 01:03:48 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:48.263970 | orchestrator | 2025-05-23 01:03:48 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:48.264315 | orchestrator | 2025-05-23 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:51.307262 | orchestrator | 2025-05-23 01:03:51 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:03:51.309408 | orchestrator | 2025-05-23 01:03:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:51.311350 | orchestrator | 2025-05-23 01:03:51 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:51.314215 | orchestrator | 2025-05-23 01:03:51 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:51.316209 | orchestrator | 2025-05-23 01:03:51 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:51.316233 | orchestrator | 2025-05-23 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:54.369147 | orchestrator | 2025-05-23 01:03:54 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:03:54.370965 | orchestrator | 2025-05-23 01:03:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:54.371440 | orchestrator | 2025-05-23 01:03:54 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:54.372204 | orchestrator | 2025-05-23 01:03:54 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:03:54.372948 | orchestrator | 2025-05-23 01:03:54 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:54.372970 | orchestrator | 2025-05-23 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:03:57.425400 | orchestrator | 2025-05-23 01:03:57 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:03:57.426407 | orchestrator | 2025-05-23 01:03:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:03:57.428465 | orchestrator | 2025-05-23 01:03:57 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:03:57.430558 | orchestrator | 2025-05-23 01:03:57 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in 
state STARTED 2025-05-23 01:03:57.432282 | orchestrator | 2025-05-23 01:03:57 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:03:57.432376 | orchestrator | 2025-05-23 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:00.473977 | orchestrator | 2025-05-23 01:04:00 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:00.475055 | orchestrator | 2025-05-23 01:04:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:00.477557 | orchestrator | 2025-05-23 01:04:00 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:04:00.478471 | orchestrator | 2025-05-23 01:04:00 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:00.479670 | orchestrator | 2025-05-23 01:04:00 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:00.480186 | orchestrator | 2025-05-23 01:04:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:03.533991 | orchestrator | 2025-05-23 01:04:03 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:03.534348 | orchestrator | 2025-05-23 01:04:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:03.535209 | orchestrator | 2025-05-23 01:04:03 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:04:03.538668 | orchestrator | 2025-05-23 01:04:03 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:03.538715 | orchestrator | 2025-05-23 01:04:03 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:03.538735 | orchestrator | 2025-05-23 01:04:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:06.581672 | orchestrator | 2025-05-23 01:04:06 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:06.584353 | orchestrator | 2025-05-23 01:04:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:06.589635 | orchestrator | 2025-05-23 01:04:06 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:04:06.592125 | orchestrator | 2025-05-23 01:04:06 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:06.594091 | orchestrator | 2025-05-23 01:04:06 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:06.594140 | orchestrator | 2025-05-23 01:04:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:09.647764 | orchestrator | 2025-05-23 01:04:09 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:09.648495 | orchestrator | 2025-05-23 01:04:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:09.652904 | orchestrator | 2025-05-23 01:04:09 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:04:09.652936 | orchestrator | 2025-05-23 01:04:09 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:09.653802 | orchestrator | 2025-05-23 01:04:09 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:09.653835 | orchestrator | 2025-05-23 01:04:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:12.696321 | orchestrator | 2025-05-23 01:04:12 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in 
state STARTED 2025-05-23 01:04:12.696632 | orchestrator | 2025-05-23 01:04:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:12.696834 | orchestrator | 2025-05-23 01:04:12 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state STARTED 2025-05-23 01:04:12.697592 | orchestrator | 2025-05-23 01:04:12 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:12.698189 | orchestrator | 2025-05-23 01:04:12 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:12.698251 | orchestrator | 2025-05-23 01:04:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:15.729034 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:15.730220 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:15.730935 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:15.731869 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task cbbd0018-4e6f-4571-8175-0b4ae02c1e46 is in state SUCCESS 2025-05-23 01:04:15.732728 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:15.733918 | orchestrator | 2025-05-23 01:04:15 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:15.733942 | orchestrator | 2025-05-23 01:04:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:18.761492 | orchestrator | 2025-05-23 01:04:18 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:18.761561 | orchestrator | 2025-05-23 01:04:18 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:18.762148 | orchestrator | 2025-05-23 01:04:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:18.762510 | orchestrator | 2025-05-23 01:04:18 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:18.763215 | orchestrator | 2025-05-23 01:04:18 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:18.763231 | orchestrator | 2025-05-23 01:04:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:21.794115 | orchestrator | 2025-05-23 01:04:21 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:21.794207 | orchestrator | 2025-05-23 01:04:21 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:21.794561 | orchestrator | 2025-05-23 01:04:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:21.795227 | orchestrator | 2025-05-23 01:04:21 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:21.795623 | orchestrator | 2025-05-23 01:04:21 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:21.795716 | orchestrator | 2025-05-23 01:04:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:24.829052 | orchestrator | 2025-05-23 01:04:24 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:24.829136 | orchestrator | 2025-05-23 01:04:24 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:24.833713 | orchestrator | 2025-05-23 01:04:24 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:24.834227 | orchestrator | 2025-05-23 01:04:24 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:24.835010 | orchestrator | 2025-05-23 01:04:24 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:24.835043 | orchestrator | 2025-05-23 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:27.859883 | orchestrator | 2025-05-23 01:04:27 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:27.860350 | orchestrator | 2025-05-23 01:04:27 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:27.861254 | orchestrator | 2025-05-23 01:04:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:27.862296 | orchestrator | 2025-05-23 01:04:27 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:27.863888 | orchestrator | 2025-05-23 01:04:27 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:27.864268 | orchestrator | 2025-05-23 01:04:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:30.909517 | orchestrator | 2025-05-23 01:04:30 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:30.909602 | orchestrator | 2025-05-23 01:04:30 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:30.911950 | orchestrator | 2025-05-23 01:04:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:30.913388 | orchestrator | 2025-05-23 01:04:30 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:30.915173 | orchestrator | 2025-05-23 01:04:30 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:30.915197 | orchestrator | 2025-05-23 01:04:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:33.945338 | orchestrator | 2025-05-23 01:04:33 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:33.945490 | orchestrator | 2025-05-23 01:04:33 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:33.946940 | orchestrator | 2025-05-23 01:04:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:33.947541 | orchestrator | 2025-05-23 01:04:33 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:33.948228 | orchestrator | 2025-05-23 01:04:33 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:33.948302 | orchestrator | 2025-05-23 01:04:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:36.995287 | orchestrator | 2025-05-23 01:04:36 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:36.995619 | orchestrator | 2025-05-23 01:04:36 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:36.996702 | orchestrator | 2025-05-23 01:04:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:36.998107 | orchestrator | 2025-05-23 01:04:36 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:37.000605 | orchestrator | 2025-05-23 01:04:37 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:37.000707 | orchestrator | 2025-05-23 
01:04:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:40.054206 | orchestrator | 2025-05-23 01:04:40 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:40.054310 | orchestrator | 2025-05-23 01:04:40 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:40.054324 | orchestrator | 2025-05-23 01:04:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:40.054336 | orchestrator | 2025-05-23 01:04:40 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:40.057033 | orchestrator | 2025-05-23 01:04:40 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:40.057060 | orchestrator | 2025-05-23 01:04:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:43.112452 | orchestrator | 2025-05-23 01:04:43 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:43.113684 | orchestrator | 2025-05-23 01:04:43 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:43.118536 | orchestrator | 2025-05-23 01:04:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:43.118578 | orchestrator | 2025-05-23 01:04:43 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:43.118586 | orchestrator | 2025-05-23 01:04:43 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:43.118595 | orchestrator | 2025-05-23 01:04:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:46.160695 | orchestrator | 2025-05-23 01:04:46 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:46.162091 | orchestrator | 2025-05-23 01:04:46 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:46.162777 | orchestrator | 2025-05-23 01:04:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:46.163107 | orchestrator | 2025-05-23 01:04:46 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:46.163819 | orchestrator | 2025-05-23 01:04:46 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:46.163846 | orchestrator | 2025-05-23 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:49.216638 | orchestrator | 2025-05-23 01:04:49 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:49.216862 | orchestrator | 2025-05-23 01:04:49 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:49.219047 | orchestrator | 2025-05-23 01:04:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:49.220424 | orchestrator | 2025-05-23 01:04:49 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:49.222065 | orchestrator | 2025-05-23 01:04:49 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:49.222100 | orchestrator | 2025-05-23 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:52.266420 | orchestrator | 2025-05-23 01:04:52 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:52.267180 | orchestrator | 2025-05-23 01:04:52 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:52.267209 | orchestrator | 2025-05-23 
01:04:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:52.267580 | orchestrator | 2025-05-23 01:04:52 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:52.268379 | orchestrator | 2025-05-23 01:04:52 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:52.269088 | orchestrator | 2025-05-23 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:55.312694 | orchestrator | 2025-05-23 01:04:55 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:55.312755 | orchestrator | 2025-05-23 01:04:55 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:55.312839 | orchestrator | 2025-05-23 01:04:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:55.314298 | orchestrator | 2025-05-23 01:04:55 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:55.314314 | orchestrator | 2025-05-23 01:04:55 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:55.314336 | orchestrator | 2025-05-23 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:04:58.346776 | orchestrator | 2025-05-23 01:04:58 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:04:58.347372 | orchestrator | 2025-05-23 01:04:58 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:04:58.348167 | orchestrator | 2025-05-23 01:04:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:04:58.349399 | orchestrator | 2025-05-23 01:04:58 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:04:58.349993 | orchestrator | 2025-05-23 01:04:58 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state STARTED 2025-05-23 01:04:58.350064 | orchestrator | 2025-05-23 01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:01.386312 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:01.386575 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:01.387221 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:01.387823 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:01.388351 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:01.388685 | orchestrator | 2025-05-23 01:05:01 | INFO  | Task 1463f3b3-9dff-41e2-88ab-8b4642c1948b is in state SUCCESS 2025-05-23 01:05:01.389062 | orchestrator | 2025-05-23 01:05:01.389091 | orchestrator | 2025-05-23 01:05:01.389103 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:05:01.389114 | orchestrator | 2025-05-23 01:05:01.389125 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:05:01.389136 | orchestrator | Friday 23 May 2025 01:03:40 +0000 (0:00:00.303) 0:00:00.303 ************ 2025-05-23 01:05:01.389147 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:05:01.389159 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:05:01.389170 | orchestrator | ok: 
[testbed-node-5] 2025-05-23 01:05:01.389181 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:05:01.389191 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:05:01.389202 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:05:01.389213 | orchestrator | ok: [testbed-manager] 2025-05-23 01:05:01.389224 | orchestrator | 2025-05-23 01:05:01.389234 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:05:01.389245 | orchestrator | Friday 23 May 2025 01:03:41 +0000 (0:00:00.909) 0:00:01.213 ************ 2025-05-23 01:05:01.389256 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389268 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389278 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389289 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389300 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389311 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389322 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-23 01:05:01.389333 | orchestrator | 2025-05-23 01:05:01.389344 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-23 01:05:01.389355 | orchestrator | 2025-05-23 01:05:01.389366 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-23 01:05:01.389377 | orchestrator | Friday 23 May 2025 01:03:42 +0000 (0:00:00.968) 0:00:02.182 ************ 2025-05-23 01:05:01.389411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2025-05-23 01:05:01.389423 | orchestrator | 2025-05-23 01:05:01.389434 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-23 01:05:01.389444 | orchestrator | Friday 23 May 2025 01:03:44 +0000 (0:00:01.409) 0:00:03.591 ************ 2025-05-23 01:05:01.389455 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-05-23 01:05:01.389466 | orchestrator | 2025-05-23 01:05:01.389476 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-23 01:05:01.389487 | orchestrator | Friday 23 May 2025 01:03:47 +0000 (0:00:03.229) 0:00:06.821 ************ 2025-05-23 01:05:01.389498 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-23 01:05:01.389510 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-23 01:05:01.389521 | orchestrator | 2025-05-23 01:05:01.389543 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-23 01:05:01.389554 | orchestrator | Friday 23 May 2025 01:03:53 +0000 (0:00:06.331) 0:00:13.152 ************ 2025-05-23 01:05:01.389565 | orchestrator | ok: [testbed-node-3] => (item=service) 2025-05-23 01:05:01.389576 | orchestrator | 2025-05-23 01:05:01.389586 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-23 01:05:01.389597 | orchestrator | Friday 23 May 2025 01:03:58 +0000 (0:00:04.402) 0:00:17.555 
************ 2025-05-23 01:05:01.389607 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:05:01.389619 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-05-23 01:05:01.389629 | orchestrator | 2025-05-23 01:05:01.389640 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-23 01:05:01.389650 | orchestrator | Friday 23 May 2025 01:04:01 +0000 (0:00:03.756) 0:00:21.311 ************ 2025-05-23 01:05:01.389661 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-05-23 01:05:01.389672 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-05-23 01:05:01.389686 | orchestrator | 2025-05-23 01:05:01.389699 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-23 01:05:01.389711 | orchestrator | Friday 23 May 2025 01:04:07 +0000 (0:00:06.036) 0:00:27.348 ************ 2025-05-23 01:05:01.389723 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-05-23 01:05:01.389736 | orchestrator | 2025-05-23 01:05:01.389749 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:05:01.389762 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389774 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389786 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389799 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389812 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389836 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389847 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:05:01.389858 | orchestrator | 2025-05-23 01:05:01.389869 | orchestrator | 2025-05-23 01:05:01.389880 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:05:01.389898 | orchestrator | Friday 23 May 2025 01:04:13 +0000 (0:00:05.599) 0:00:32.947 ************ 2025-05-23 01:05:01.389913 | orchestrator | =============================================================================== 2025-05-23 01:05:01.389933 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.33s 2025-05-23 01:05:01.389976 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.04s 2025-05-23 01:05:01.389998 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.60s 2025-05-23 01:05:01.390069 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.40s 2025-05-23 01:05:01.390085 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.76s 2025-05-23 01:05:01.390095 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.23s 2025-05-23 01:05:01.390106 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.41s 2025-05-23 01:05:01.390117 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.97s 2025-05-23 01:05:01.390128 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2025-05-23 01:05:01.390139 | orchestrator | 2025-05-23 01:05:01.390150 | orchestrator | 2025-05-23 01:05:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:04.425886 | orchestrator | 2025-05-23 01:05:04 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:04.426088 | orchestrator | 2025-05-23 01:05:04 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:04.426476 | orchestrator | 2025-05-23 01:05:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:04.427512 | orchestrator | 2025-05-23 01:05:04 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:04.427537 | orchestrator | 2025-05-23 01:05:04 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:04.427549 | orchestrator | 2025-05-23 01:05:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:07.462336 | orchestrator | 2025-05-23 01:05:07 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:07.462418 | orchestrator | 2025-05-23 01:05:07 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:07.462579 | orchestrator | 2025-05-23 01:05:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:07.462600 | orchestrator | 2025-05-23 01:05:07 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:07.465087 | orchestrator | 2025-05-23 01:05:07 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:07.465110 | orchestrator | 2025-05-23 01:05:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:10.486430 | orchestrator | 2025-05-23 01:05:10 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:10.486631 | orchestrator | 2025-05-23 01:05:10 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:10.488219 | orchestrator | 2025-05-23 01:05:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:10.493884 | orchestrator | 2025-05-23 01:05:10 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:10.494427 | orchestrator | 2025-05-23 01:05:10 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:10.494453 | orchestrator | 2025-05-23 01:05:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:13.521041 | orchestrator | 2025-05-23 01:05:13 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:13.521926 | orchestrator | 2025-05-23 01:05:13 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:13.525708 | orchestrator | 2025-05-23 01:05:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:13.527398 | orchestrator | 2025-05-23 01:05:13 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:13.528861 | orchestrator | 2025-05-23 01:05:13 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:13.529194 | orchestrator | 2025-05-23 01:05:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:16.567370 | orchestrator | 2025-05-23 01:05:16 
| INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:16.570438 | orchestrator | 2025-05-23 01:05:16 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:16.573717 | orchestrator | 2025-05-23 01:05:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:16.574165 | orchestrator | 2025-05-23 01:05:16 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:16.574879 | orchestrator | 2025-05-23 01:05:16 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:16.574914 | orchestrator | 2025-05-23 01:05:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:19.605600 | orchestrator | 2025-05-23 01:05:19 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:19.605756 | orchestrator | 2025-05-23 01:05:19 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:19.606352 | orchestrator | 2025-05-23 01:05:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:19.607569 | orchestrator | 2025-05-23 01:05:19 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:19.608116 | orchestrator | 2025-05-23 01:05:19 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:19.608145 | orchestrator | 2025-05-23 01:05:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:22.644241 | orchestrator | 2025-05-23 01:05:22 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:22.644604 | orchestrator | 2025-05-23 01:05:22 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:22.645558 | orchestrator | 2025-05-23 01:05:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:22.646366 | orchestrator | 2025-05-23 01:05:22 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:22.647881 | orchestrator | 2025-05-23 01:05:22 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:22.647921 | orchestrator | 2025-05-23 01:05:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:25.676455 | orchestrator | 2025-05-23 01:05:25 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:25.676569 | orchestrator | 2025-05-23 01:05:25 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:25.676858 | orchestrator | 2025-05-23 01:05:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:25.677467 | orchestrator | 2025-05-23 01:05:25 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:25.678742 | orchestrator | 2025-05-23 01:05:25 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:25.678838 | orchestrator | 2025-05-23 01:05:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:28.705794 | orchestrator | 2025-05-23 01:05:28 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:28.705883 | orchestrator | 2025-05-23 01:05:28 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:28.706122 | orchestrator | 2025-05-23 01:05:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:28.706577 | orchestrator 
| 2025-05-23 01:05:28 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:28.707000 | orchestrator | 2025-05-23 01:05:28 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:28.707186 | orchestrator | 2025-05-23 01:05:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:31.733693 | orchestrator | 2025-05-23 01:05:31 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:31.734082 | orchestrator | 2025-05-23 01:05:31 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:31.734590 | orchestrator | 2025-05-23 01:05:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:31.735045 | orchestrator | 2025-05-23 01:05:31 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:31.738513 | orchestrator | 2025-05-23 01:05:31 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:31.738546 | orchestrator | 2025-05-23 01:05:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:34.766508 | orchestrator | 2025-05-23 01:05:34 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:34.769197 | orchestrator | 2025-05-23 01:05:34 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:34.769225 | orchestrator | 2025-05-23 01:05:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:34.769237 | orchestrator | 2025-05-23 01:05:34 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:34.769248 | orchestrator | 2025-05-23 01:05:34 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:34.769259 | orchestrator | 2025-05-23 01:05:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:37.799136 | orchestrator | 2025-05-23 01:05:37 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:37.799319 | orchestrator | 2025-05-23 01:05:37 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:37.799730 | orchestrator | 2025-05-23 01:05:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:37.800153 | orchestrator | 2025-05-23 01:05:37 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:37.800680 | orchestrator | 2025-05-23 01:05:37 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:37.800704 | orchestrator | 2025-05-23 01:05:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:40.828439 | orchestrator | 2025-05-23 01:05:40 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:05:40.828610 | orchestrator | 2025-05-23 01:05:40 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:05:40.829203 | orchestrator | 2025-05-23 01:05:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:05:40.829699 | orchestrator | 2025-05-23 01:05:40 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED 2025-05-23 01:05:40.830485 | orchestrator | 2025-05-23 01:05:40 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:05:40.830523 | orchestrator | 2025-05-23 01:05:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:05:43.855361 | orchestrator | 
2025-05-23 01:05:43 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED
2025-05-23 01:05:43.855462 | orchestrator | 2025-05-23 01:05:43 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED
2025-05-23 01:05:43.855711 | orchestrator | 2025-05-23 01:05:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:05:43.856036 | orchestrator | 2025-05-23 01:05:43 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state STARTED
2025-05-23 01:05:43.857531 | orchestrator | 2025-05-23 01:05:43 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:05:43.857552 | orchestrator | 2025-05-23 01:05:43 | INFO  | Wait 1 second(s) until the next check
[... polling of the same five tasks repeated every ~3 seconds with identical STARTED results from 01:05:46 through 01:06:41 ...]
2025-05-23 01:06:44.903440 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED
2025-05-23 01:06:44.904416 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED
2025-05-23 01:06:44.905344 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:06:44.907989 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task 5dd1c433-c8af-41a8-b98b-d3659ca154b2 is in state SUCCESS
2025-05-23 01:06:44.909663 | orchestrator |
2025-05-23 01:06:44.909692 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-23 01:06:44.909704 | orchestrator |
2025-05-23 01:06:44.909715 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-23 01:06:44.909726 | orchestrator | Friday 23 May 2025 00:58:48 +0000 (0:00:00.125) 0:00:00.125 ************
2025-05-23 01:06:44.909737 | orchestrator | changed: [localhost]
2025-05-23 01:06:44.909748 | orchestrator |
2025-05-23 01:06:44.909759 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-23 01:06:44.909770 | orchestrator | Friday 23 May 2025 00:58:48 +0000 (0:00:00.489) 0:00:00.614 ************
2025-05-23 01:06:44.909781 | orchestrator |
2025-05-23 01:06:44.909791 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909802 | orchestrator |
2025-05-23 01:06:44.909813 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909824 | orchestrator |
2025-05-23 01:06:44.909834 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909897 | orchestrator |
2025-05-23 01:06:44.909909 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909919 | orchestrator |
2025-05-23 01:06:44.909930 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909941 | orchestrator |
2025-05-23 01:06:44.909952 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909962 | orchestrator |
2025-05-23 01:06:44.909973 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-23 01:06:44.909984 | orchestrator | changed: [localhost]
2025-05-23 01:06:44.909994 | orchestrator |
2025-05-23 01:06:44.910005 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-23 01:06:44.910059 | orchestrator | Friday 23 May 2025 01:04:44 +0000 (0:05:55.135) 0:05:55.750 ************
2025-05-23 01:06:44.910210 | orchestrator | changed: [localhost]
2025-05-23 01:06:44.910246 | orchestrator |
2025-05-23 01:06:44.910257 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-23 01:06:44.910268 | orchestrator |
2025-05-23 01:06:44.910280 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-23 01:06:44.910292 | orchestrator | Friday 23 May 2025 01:04:56 +0000 (0:00:12.698) 0:06:08.448 ************
2025-05-23 01:06:44.910305 | orchestrator | ok: [testbed-node-0]
2025-05-23 01:06:44.910317 | orchestrator | ok: [testbed-node-1]
2025-05-23 01:06:44.910329 | orchestrator | ok: [testbed-node-2]
2025-05-23 01:06:44.910341 | orchestrator |
2025-05-23 01:06:44.910354 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-23 01:06:44.910366 | orchestrator | Friday 23 May 2025 01:04:57 +0000 (0:00:00.634) 0:06:09.083 ************
2025-05-23 01:06:44.910378 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-23 01:06:44.910390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-23 01:06:44.910402 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-23 01:06:44.910415 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-23 01:06:44.910427 | orchestrator |
2025-05-23 01:06:44.910439 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-23 01:06:44.910452 | orchestrator | skipping: no hosts matched
2025-05-23 01:06:44.910465 | orchestrator |
2025-05-23 01:06:44.910490 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 01:06:44.910503 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-23 01:06:44.910518 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-23 01:06:44.910531 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-23 01:06:44.910543 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-23 01:06:44.910555 | orchestrator |
2025-05-23 01:06:44.910568 | orchestrator |
2025-05-23 01:06:44.910580 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 01:06:44.910592 | orchestrator | Friday 23 May 2025 01:04:58 +0000 (0:00:01.373) 0:06:10.457 ************
2025-05-23 01:06:44.910604 | orchestrator | ===============================================================================
2025-05-23 01:06:44.910616 | orchestrator | Download ironic-agent initramfs --------------------------------------- 355.14s
2025-05-23 01:06:44.910627 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.70s
2025-05-23 01:06:44.910637 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s
2025-05-23 01:06:44.910648 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s
2025-05-23 01:06:44.910658 | orchestrator | Ensure the destination directory exists --------------------------------- 0.49s
2025-05-23 01:06:44.910669 | orchestrator |
2025-05-23 01:06:44.910679 | orchestrator |
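Editor's note: the "Download ironic ipa images" play recapped above boils down to one directory task and two get_url downloads, and the repeated STILL ALIVE markers were emitted only while the slow initramfs download (5:55) was running. A minimal Ansible sketch of such a play is shown below; the destination path and image URLs are placeholder assumptions for illustration and are not values taken from this job.

---
# Sketch only: path and URLs below are assumed placeholders, not taken from this job.
- name: Download ironic ipa images
  hosts: localhost
  tasks:
    - name: Ensure the destination directory exists
      ansible.builtin.file:
        path: /opt/ironic-agent            # assumed destination directory
        state: directory
        mode: "0755"

    - name: Download ironic-agent initramfs
      ansible.builtin.get_url:
        url: https://example.org/ironic-python-agent.initramfs   # assumed URL
        dest: /opt/ironic-agent/ironic-agent.initramfs
        mode: "0644"

    - name: Download ironic-agent kernel
      ansible.builtin.get_url:
        url: https://example.org/ironic-python-agent.kernel      # assumed URL
        dest: /opt/ironic-agent/ironic-agent.kernel
        mode: "0644"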
2025-05-23 01:06:44.910690 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-23 01:06:44.910700 | orchestrator |
2025-05-23 01:06:44.910710 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-23 01:06:44.910721 | orchestrator | Friday 23 May 2025 01:02:22 +0000 (0:00:00.299) 0:00:00.299 ************
2025-05-23 01:06:44.910731 | orchestrator | ok: [testbed-manager]
2025-05-23 01:06:44.910742 | orchestrator | ok: [testbed-node-0]
2025-05-23 01:06:44.910752 | orchestrator | ok: [testbed-node-1]
2025-05-23 01:06:44.910763 | orchestrator | ok: [testbed-node-2]
2025-05-23 01:06:44.910773 | orchestrator | ok: [testbed-node-3]
2025-05-23 01:06:44.910784 | orchestrator | ok: [testbed-node-4]
2025-05-23 01:06:44.910794 | orchestrator | ok: [testbed-node-5]
2025-05-23 01:06:44.910805 | orchestrator |
2025-05-23 01:06:44.910822 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-23 01:06:44.910833 | orchestrator | Friday 23 May 2025 01:02:23 +0000 (0:00:00.825) 0:00:01.125 ************
2025-05-23 01:06:44.910877 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910890 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910901 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910911 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910922 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910933 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910944 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-23 01:06:44.910954 | orchestrator |
2025-05-23 01:06:44.910966 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-23 01:06:44.910976 | orchestrator |
2025-05-23 01:06:44.911066 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-23 01:06:44.911079 | orchestrator | Friday 23 May 2025 01:02:24 +0000 (0:00:00.930) 0:00:02.055 ************
2025-05-23 01:06:44.911090 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-23 01:06:44.911102 | orchestrator |
2025-05-23 01:06:44.911112 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
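Editor's note: the "Ensuring config directories exist" task whose per-host results follow iterates over a dictionary of Prometheus services and acts only where a service is enabled and the host belongs to that service's group, which is why the output mixes changed: and skipping: items. A minimal, simplified sketch of that kolla-ansible-style pattern is below; the variable name prometheus_services and the exact file options are assumptions, not taken from this job.

# Sketch only: simplified pattern; the real role uses its own variables and options.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"     # one directory per service key, e.g. prometheus-server
    state: directory
    owner: root
    group: root
    mode: "0770"
  become: true
  when:
    - item.value.enabled | bool                      # skip disabled services
    - inventory_hostname in groups[item.value.group] # skip hosts outside the service group
  with_dict: "{{ prometheus_services }}"             # assumed variable name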
2025-05-23 01:06:44.911123 | orchestrator | Friday 23 May 2025 01:02:25 +0000 (0:00:01.636) 0:00:03.692 ************ 2025-05-23 01:06:44.911137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911169 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 01:06:44.911188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911221 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911306 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.911325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911414 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 01:06:44.911449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.911462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.911490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.911520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.911549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.911587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.911623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 
01:06:44.911646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.911657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.911695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.911707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.911749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.911761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.911794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.911806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.912304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.912325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.912373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.912385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.912495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.912514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.912537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.912578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.912652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.912664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.912746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.912757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.912831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.912947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 
'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.912980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.912994 | orchestrator | 2025-05-23 01:06:44.913006 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-23 01:06:44.913020 | orchestrator | Friday 23 May 2025 01:02:29 +0000 (0:00:04.098) 0:00:07.790 ************ 2025-05-23 01:06:44.913033 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:06:44.913046 | orchestrator | 2025-05-23 01:06:44.913058 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-23 01:06:44.913071 | orchestrator | Friday 23 May 2025 01:02:31 +0000 (0:00:01.994) 0:00:09.784 ************ 2025-05-23 01:06:44.913084 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 01:06:44.913106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913196 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.913220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-05-23 01:06:44.913304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913378 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 01:06:44.913392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913411 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.913428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.913460 | orchestrator | 2025-05-23 01:06:44.913469 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-23 01:06:44.913476 | orchestrator | Friday 23 May 2025 01:02:39 +0000 (0:00:07.069) 0:00:16.854 ************ 2025-05-23 01:06:44.913485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913529 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.913541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.913554 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
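For orientation while reading these loops: every item logged above follows the same kolla-ansible service-definition shape (container_name, group, enabled, image, volumes, dimensions, optional haproxy). The following is a minimal illustrative sketch only, not the actual kolla-ansible source; the variable name prometheus_services, the copy destination, and the when-conditions are assumptions inferred from the logged item structure (the values shown for prometheus-server are copied from the items above).

    # Sketch of one entry in the dict the role iterates over (assumed name: prometheus_services).
    prometheus_services:
      prometheus-server:
        container_name: prometheus_server
        group: prometheus
        enabled: true
        image: "registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206"
        volumes:
          - "/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro"
          - "prometheus_v2:/var/lib/prometheus"
          - "kolla_logs:/var/log/kolla/"
        haproxy:
          prometheus_server:
            enabled: true
            mode: http
            external: false
            port: "9091"
            active_passive: true

    # Sketch of the per-service loop behind "service-cert-copy : prometheus | Copying over extra CA certificates".
    # Task name, source path and conditions are assumptions; the when-conditions explain why the same item
    # shows up as "changed" on hosts in the service's group and as "skipping" everywhere else.
    - name: "prometheus | Copying over extra CA certificates (illustrative sketch)"
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir | default('/etc/kolla/certificates') }}/ca/"
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      with_dict: "{{ prometheus_services }}"
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]

The subsequent "backend internal TLS certificate/key" tasks iterate the same dict, which is why they skip every item here: backend TLS is not enabled for these services in this testbed run.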
 2025-05-23 01:06:44.913575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.913584 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913592 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.913600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913649 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.913657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913706 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.913718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913743 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.913751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913779 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.913789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913819 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.913826 | orchestrator | 2025-05-23 01:06:44.913834 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-23 01:06:44.913855 | orchestrator | Friday 23 May 2025 01:02:44 +0000 (0:00:05.159) 0:00:22.013 ************ 2025-05-23 01:06:44.913863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 
01:06:44.913886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913916 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.913929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.913954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.913982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.913996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.914008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.914045 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.914056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.914067 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.914076 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.914088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.914096 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.914109 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.914530 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.914609 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.914628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.914641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.914653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.914664 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.914689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-23 01:06:44.914719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.914732 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-23 01:06:44.914743 | orchestrator | skipping: [testbed-node-3]
2025-05-23 01:06:44.914754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-23 01:06:44.915219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-23 01:06:44.915246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-23 01:06:44.915258 | orchestrator | skipping: [testbed-node-5]
2025-05-23 01:06:44.915269 | orchestrator |
2025-05-23 01:06:44.915281 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-23 01:06:44.915292 | orchestrator | Friday 23 May 2025 01:02:48 +0000 (0:00:04.681) 0:00:26.694 ************
2025-05-23 01:06:44.915304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-23 01:06:44.915337 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.915349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.915372 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 01:06:44.915384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.915395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.915419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.915442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915460 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915472 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915484 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.915602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.915622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-23 01:06:44.915657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.915675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.915689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.915712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.915724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.915736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.915788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.915804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.915831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.915885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.915898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.915921 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916196 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 01:06:44.916238 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.916259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.916306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 
'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.916318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916387 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916434 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.916445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916457 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.916484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.916502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.916554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.916581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.916750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.916766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.916794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.916817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.916828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.916870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-23 01:06:44.916882 | orchestrator |
2025-05-23 01:06:44.916894 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-23 01:06:44.916905 | orchestrator | Friday 23 May 2025 01:02:57 +0000 (0:00:08.800) 0:00:35.495 ************
2025-05-23 01:06:44.916916 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-23 01:06:44.916926 | orchestrator |
2025-05-23 01:06:44.916937 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-23 01:06:44.916948 | orchestrator | Friday 23 May 2025 01:02:58 +0000 (0:00:00.562) 0:00:36.057 ************
2025-05-23 01:06:44.916959 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-23 01:06:44.916970 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-23 01:06:44.916988 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-23 01:06:44.916999 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
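The changed/skipping pattern in the copy tasks above reflects the service-to-host mapping visible in each item: a file is rendered on a host only when the owning service is enabled and the host is mapped to that service's group (prometheus_server and the alert rules only on testbed-manager, the mysqld and memcached exporters only on testbed-node-0/1/2, the libvirt exporter only on testbed-node-3/4/5, and the disabled openstack-exporter and msteams services nowhere). The following is a minimal Python sketch of that selection logic, not the kolla-ansible implementation; the group membership below is a hypothetical example chosen to mirror the results in the log.

# Sketch only: reproduce the "changed" vs. "skipping" decision seen above,
# assuming a config is rendered when the service is enabled and the host is
# a member of the service's inventory group.
prometheus_services = {
    "prometheus-server": {"group": "prometheus", "enabled": True},
    "prometheus-node-exporter": {"group": "prometheus-node-exporter", "enabled": True},
    "prometheus-libvirt-exporter": {"group": "prometheus-libvirt-exporter", "enabled": True},
    "prometheus-openstack-exporter": {"group": "prometheus-openstack-exporter", "enabled": False},
}

# Hypothetical inventory mapping, for illustration only.
host_groups = {
    "testbed-manager": {"prometheus", "prometheus-node-exporter"},
    "testbed-node-3": {"prometheus-node-exporter", "prometheus-libvirt-exporter"},
}

def config_action(host: str, service: dict) -> str:
    """Return 'changed' when the config would be rendered on this host, else 'skipping'."""
    if service["enabled"] and service["group"] in host_groups.get(host, set()):
        return "changed"
    return "skipping"

for host in host_groups:
    for name, service in prometheus_services.items():
        print(f"{host}: {name} -> {config_action(host, service)}")

Running the sketch prints "changed" for prometheus-server only on testbed-manager and for the libvirt exporter only on the compute node, matching the per-host results recorded by the tasks above.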
2025-05-23 01:06:44.917011 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917033 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917044 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917085 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917096 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1096592, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.917107 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917130 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917142 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917164 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 
1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917180 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917191 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917208 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917236 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917247 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917275 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917287 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917303 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917320 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917332 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917352 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917364 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1096614, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.917379 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917390 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917408 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 
01:06:44.917425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917437 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917459 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917474 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917485 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917502 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917519 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917541 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917567 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 
'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917595 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917612 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917623 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917634 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917646 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917661 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1096599, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0524898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.917689 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917700 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917718 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917730 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917741 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917756 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917776 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917788 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.917799 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.918371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918403 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918421 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.918433 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918444 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.918455 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918483 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918494 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.918506 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918517 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.918529 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918548 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-23 01:06:44.918560 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.918571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096610, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0544896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096660, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.06549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918600 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096638, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096606, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918628 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096616, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0554898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918639 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096658, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0644898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096602, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0534897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918666 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1096646, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0614898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-23 01:06:44.918678 | orchestrator | 2025-05-23 01:06:44.918689 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-23 01:06:44.918701 | orchestrator | Friday 23 May 2025 01:03:34 +0000 (0:00:36.676) 0:01:12.734 ************ 2025-05-23 01:06:44.918711 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 01:06:44.918722 | orchestrator | 2025-05-23 01:06:44.918733 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-23 01:06:44.918750 | orchestrator | Friday 23 May 2025 01:03:35 +0000 (0:00:00.488) 0:01:13.222 ************ 2025-05-23 01:06:44.918761 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.918772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.918783 | orchestrator | manager/prometheus.yml.d' path due to this 
access issue: 2025-05-23 01:06:44.918794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.918805 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.918816 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 01:06:44.918828 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.918900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.918914 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.918926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.918937 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.918949 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:06:44.918960 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.918973 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.918986 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.918999 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919017 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.919030 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-23 01:06:44.919044 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.919057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919071 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.919084 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919097 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.919110 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.919123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919135 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.919147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919160 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.919172 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.919185 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919198 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.919211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919222 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.919234 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.919245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919256 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-23 01:06:44.919268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-23 01:06:44.919279 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-23 01:06:44.919291 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-23 01:06:44.919302 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 
01:06:44.919314 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-23 01:06:44.919325 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-23 01:06:44.919335 | orchestrator | 2025-05-23 01:06:44.919345 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-23 01:06:44.919361 | orchestrator | Friday 23 May 2025 01:03:36 +0000 (0:00:01.611) 0:01:14.834 ************ 2025-05-23 01:06:44.919371 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919382 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.919392 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919408 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.919418 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919428 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.919438 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919448 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.919458 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919467 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.919477 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-23 01:06:44.919487 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.919497 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-23 01:06:44.919507 | orchestrator | 2025-05-23 01:06:44.919517 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-23 01:06:44.919526 | orchestrator | Friday 23 May 2025 01:03:51 +0000 (0:00:14.998) 0:01:29.833 ************ 2025-05-23 01:06:44.919536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919547 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.919556 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919566 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.919576 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919586 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.919596 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919605 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.919615 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919625 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.919635 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-23 01:06:44.919645 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.919655 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-23 01:06:44.919664 | orchestrator | 2025-05-23 
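The two templating tasks above render prometheus.yml and prometheus-web.yml onto testbed-manager only, and the preceding "Find prometheus host config overrides" task merely looks for per-host drop-in directories under /opt/configuration/environments/kolla/files/overlays/prometheus/<host>/prometheus.yml.d — the [WARNING] lines just record that no such directories exist in this testbed. A drop-in of that kind would be a plain Prometheus configuration fragment merged into the rendered config; the job name and target below are hypothetical and only sketch the idea:

    # hypothetical file: overlays/prometheus/testbed-manager/prometheus.yml.d/99-custom.yml
    scrape_configs:
      - job_name: custom_exporter                        # invented for illustration
        static_configs:
          - targets:
              - "192.168.16.5:9100"                      # hypothetical exporter address
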
01:06:44.919674 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-23 01:06:44.919684 | orchestrator | Friday 23 May 2025 01:03:57 +0000 (0:00:05.244) 0:01:35.077 ************ 2025-05-23 01:06:44.919694 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919709 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.919719 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919729 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.919739 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919749 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919764 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.919774 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.919784 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919794 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.919804 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-23 01:06:44.919814 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.919824 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-23 01:06:44.919833 | orchestrator | 2025-05-23 01:06:44.919858 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-23 01:06:44.919868 | orchestrator | Friday 23 May 2025 01:04:01 +0000 (0:00:03.803) 0:01:38.880 ************ 2025-05-23 01:06:44.919878 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 01:06:44.919887 | orchestrator | 2025-05-23 01:06:44.919897 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-23 01:06:44.919907 | orchestrator | Friday 23 May 2025 01:04:01 +0000 (0:00:00.620) 0:01:39.500 ************ 2025-05-23 01:06:44.919916 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.919926 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.919936 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.919946 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.919955 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.919965 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.919974 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.919984 | orchestrator | 2025-05-23 01:06:44.919994 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-23 01:06:44.920003 | orchestrator | Friday 23 May 2025 01:04:02 +0000 (0:00:00.841) 0:01:40.342 ************ 2025-05-23 01:06:44.920013 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.920023 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920039 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920049 
| orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920058 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.920068 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.920077 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.920087 | orchestrator | 2025-05-23 01:06:44.920097 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-23 01:06:44.920106 | orchestrator | Friday 23 May 2025 01:04:06 +0000 (0:00:03.700) 0:01:44.042 ************ 2025-05-23 01:06:44.920116 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920126 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920135 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920145 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920164 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920174 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920184 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920194 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920203 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920213 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920223 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920232 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-23 01:06:44.920248 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.920257 | orchestrator | 2025-05-23 01:06:44.920267 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-23 01:06:44.920277 | orchestrator | Friday 23 May 2025 01:04:09 +0000 (0:00:03.003) 0:01:47.046 ************ 2025-05-23 01:06:44.920286 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920296 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920306 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920316 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920326 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920335 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920345 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920355 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920368 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920379 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920388 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-23 01:06:44.920398 
| orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920408 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-23 01:06:44.920418 | orchestrator | 2025-05-23 01:06:44.920427 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-23 01:06:44.920437 | orchestrator | Friday 23 May 2025 01:04:13 +0000 (0:00:04.152) 0:01:51.198 ************ 2025-05-23 01:06:44.920447 | orchestrator | [WARNING]: Skipped 2025-05-23 01:06:44.920457 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-23 01:06:44.920466 | orchestrator | due to this access issue: 2025-05-23 01:06:44.920476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-23 01:06:44.920486 | orchestrator | not a directory 2025-05-23 01:06:44.920496 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-23 01:06:44.920505 | orchestrator | 2025-05-23 01:06:44.920515 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-23 01:06:44.920525 | orchestrator | Friday 23 May 2025 01:04:15 +0000 (0:00:02.160) 0:01:53.359 ************ 2025-05-23 01:06:44.920534 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.920544 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920553 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920563 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920572 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920582 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920591 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920601 | orchestrator | 2025-05-23 01:06:44.920611 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-23 01:06:44.920620 | orchestrator | Friday 23 May 2025 01:04:16 +0000 (0:00:00.836) 0:01:54.196 ************ 2025-05-23 01:06:44.920630 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.920639 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920649 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920658 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920668 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920677 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920687 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920696 | orchestrator | 2025-05-23 01:06:44.920706 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-23 01:06:44.920722 | orchestrator | Friday 23 May 2025 01:04:17 +0000 (0:00:01.015) 0:01:55.211 ************ 2025-05-23 01:06:44.920737 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920748 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920758 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920768 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920777 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920787 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920796 | orchestrator | skipping: 
[testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920806 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920815 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920825 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.920835 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920858 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.920868 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-23 01:06:44.920877 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.920887 | orchestrator | 2025-05-23 01:06:44.920897 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-23 01:06:44.920906 | orchestrator | Friday 23 May 2025 01:04:21 +0000 (0:00:04.532) 0:01:59.744 ************ 2025-05-23 01:06:44.920916 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.920926 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:06:44.920935 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.920945 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:06:44.920954 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.920964 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:06:44.920973 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.920983 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:06:44.920992 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.921002 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:06:44.921011 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.921021 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:06:44.921030 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-23 01:06:44.921044 | orchestrator | skipping: [testbed-manager] 2025-05-23 01:06:44.921054 | orchestrator | 2025-05-23 01:06:44.921064 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-23 01:06:44.921073 | orchestrator | Friday 23 May 2025 01:04:25 +0000 (0:00:03.907) 0:02:03.652 ************ 2025-05-23 01:06:44.921084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921159 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-23 01:06:44.921178 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-23 01:06:44.921195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921268 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921295 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-23 01:06:44.921357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921457 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921483 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 
01:06:44.921665 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-23 01:06:44.921684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.921799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.921814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.921825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-23 01:06:44.921939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-23 01:06:44.921950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-23 01:06:44.921960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.922039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.922064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.922095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.922115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-23 01:06:44.922140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-23 01:06:44.922166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-23 01:06:44.922176 | orchestrator | 2025-05-23 01:06:44.922185 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-23 01:06:44.922195 | orchestrator | Friday 23 May 2025 01:04:31 +0000 (0:00:05.324) 0:02:08.976 ************ 2025-05-23 01:06:44.922205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-23 01:06:44.922215 | orchestrator | 2025-05-23 01:06:44.922224 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922234 | orchestrator | Friday 23 May 2025 01:04:33 +0000 (0:00:02.601) 0:02:11.578 ************ 2025-05-23 01:06:44.922244 | orchestrator | 2025-05-23 01:06:44.922253 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922263 | orchestrator | Friday 23 May 2025 01:04:33 +0000 (0:00:00.051) 0:02:11.629 ************ 2025-05-23 01:06:44.922272 | orchestrator | 2025-05-23 01:06:44.922282 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922291 | orchestrator | Friday 23 May 2025 01:04:33 +0000 (0:00:00.185) 0:02:11.814 ************ 2025-05-23 01:06:44.922300 | orchestrator | 2025-05-23 01:06:44.922310 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922319 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:00.050) 0:02:11.865 ************ 2025-05-23 01:06:44.922329 | orchestrator | 2025-05-23 01:06:44.922338 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922348 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:00.059) 0:02:11.924 ************ 2025-05-23 01:06:44.922357 | orchestrator | 2025-05-23 01:06:44.922367 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922377 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:00.066) 0:02:11.991 ************ 2025-05-23 01:06:44.922386 | orchestrator | 2025-05-23 01:06:44.922395 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-23 01:06:44.922405 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:00.237) 0:02:12.228 ************ 2025-05-23 01:06:44.922414 | orchestrator | 2025-05-23 01:06:44.922424 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-23 01:06:44.922458 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:00.087) 0:02:12.316 ************ 2025-05-23 01:06:44.922468 | orchestrator | changed: 
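The RUNNING HANDLER entries that follow are Ansible handlers notified by the earlier configuration tasks: a restart only runs on hosts where one of the notifying tasks reported "changed", which is why, for example, the mysqld-exporter restart below fires only on testbed-node-0/1/2. A generic sketch of the notify/handler pattern at work here, using plain built-in modules (simplified assumption, not the actual kolla-ansible role code):

    # task: copying a config file notifies the matching restart handler (assumed, simplified)
    - name: Copying config file for blackbox exporter
      ansible.builtin.template:
        src: prometheus-blackbox-exporter.yml.j2
        dest: /etc/kolla/prometheus-blackbox-exporter/prometheus-blackbox-exporter.yml
      notify:
        - Restart prometheus-blackbox-exporter container

    # handler: runs at "Flush handlers" time, only where a notifying task changed something
    - name: Restart prometheus-blackbox-exporter container
      become: true
      ansible.builtin.command: docker restart prometheus_blackbox_exporter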
[testbed-manager] 2025-05-23 01:06:44.922478 | orchestrator | 2025-05-23 01:06:44.922487 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-23 01:06:44.922496 | orchestrator | Friday 23 May 2025 01:04:52 +0000 (0:00:18.052) 0:02:30.369 ************ 2025-05-23 01:06:44.922505 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:06:44.922515 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.922524 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:06:44.922534 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.922543 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:06:44.922552 | orchestrator | changed: [testbed-manager] 2025-05-23 01:06:44.922561 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.922570 | orchestrator | 2025-05-23 01:06:44.922580 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-23 01:06:44.922595 | orchestrator | Friday 23 May 2025 01:05:15 +0000 (0:00:22.522) 0:02:52.891 ************ 2025-05-23 01:06:44.922604 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.922613 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.922623 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.922632 | orchestrator | 2025-05-23 01:06:44.922642 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-23 01:06:44.922651 | orchestrator | Friday 23 May 2025 01:05:29 +0000 (0:00:14.262) 0:03:07.154 ************ 2025-05-23 01:06:44.922661 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.922675 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.922685 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.922695 | orchestrator | 2025-05-23 01:06:44.922704 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-23 01:06:44.922714 | orchestrator | Friday 23 May 2025 01:05:43 +0000 (0:00:14.566) 0:03:21.720 ************ 2025-05-23 01:06:44.922723 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:06:44.922733 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.922742 | orchestrator | changed: [testbed-manager] 2025-05-23 01:06:44.922751 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.922760 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:06:44.922770 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:06:44.922779 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.922788 | orchestrator | 2025-05-23 01:06:44.922798 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-23 01:06:44.922807 | orchestrator | Friday 23 May 2025 01:06:03 +0000 (0:00:19.551) 0:03:41.271 ************ 2025-05-23 01:06:44.922817 | orchestrator | changed: [testbed-manager] 2025-05-23 01:06:44.922826 | orchestrator | 2025-05-23 01:06:44.922836 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-23 01:06:44.922888 | orchestrator | Friday 23 May 2025 01:06:13 +0000 (0:00:09.896) 0:03:51.168 ************ 2025-05-23 01:06:44.922897 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:06:44.922907 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:06:44.922916 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:06:44.922926 | orchestrator | 2025-05-23 01:06:44.922935 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-blackbox-exporter container] *** 2025-05-23 01:06:44.922944 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:07.191) 0:03:58.360 ************ 2025-05-23 01:06:44.922954 | orchestrator | changed: [testbed-manager] 2025-05-23 01:06:44.922963 | orchestrator | 2025-05-23 01:06:44.922972 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-23 01:06:44.922982 | orchestrator | Friday 23 May 2025 01:06:29 +0000 (0:00:08.743) 0:04:07.104 ************ 2025-05-23 01:06:44.922991 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:06:44.923001 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:06:44.923010 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:06:44.923019 | orchestrator | 2025-05-23 01:06:44.923028 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:06:44.923038 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-23 01:06:44.923048 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-23 01:06:44.923058 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-23 01:06:44.923071 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-23 01:06:44.923081 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-23 01:06:44.923096 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-23 01:06:44.923106 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-23 01:06:44.923116 | orchestrator | 2025-05-23 01:06:44.923125 | orchestrator | 2025-05-23 01:06:44.923135 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:06:44.923144 | orchestrator | Friday 23 May 2025 01:06:41 +0000 (0:00:12.401) 0:04:19.506 ************ 2025-05-23 01:06:44.923154 | orchestrator | =============================================================================== 2025-05-23 01:06:44.923163 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 36.68s 2025-05-23 01:06:44.923173 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.52s 2025-05-23 01:06:44.923182 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.55s 2025-05-23 01:06:44.923191 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.05s 2025-05-23 01:06:44.923201 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.00s 2025-05-23 01:06:44.923210 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.57s 2025-05-23 01:06:44.923219 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 14.26s 2025-05-23 01:06:44.923229 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.40s 2025-05-23 01:06:44.923238 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.90s 2025-05-23 01:06:44.923248 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.80s 2025-05-23 
01:06:44.923257 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.74s 2025-05-23 01:06:44.923266 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.19s 2025-05-23 01:06:44.923276 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.07s 2025-05-23 01:06:44.923285 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.32s 2025-05-23 01:06:44.923300 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.24s 2025-05-23 01:06:44.923308 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 5.16s 2025-05-23 01:06:44.923316 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.68s 2025-05-23 01:06:44.923323 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 4.53s 2025-05-23 01:06:44.923332 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.15s 2025-05-23 01:06:44.923340 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.10s 2025-05-23 01:06:44.923347 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:06:44.923356 | orchestrator | 2025-05-23 01:06:44 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:06:44.923364 | orchestrator | 2025-05-23 01:06:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:06:47.936403 | orchestrator | 2025-05-23 01:06:47 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:06:47.936493 | orchestrator | 2025-05-23 01:06:47 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:06:47.937016 | orchestrator | 2025-05-23 01:06:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:06:47.937436 | orchestrator | 2025-05-23 01:06:47 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:06:47.937758 | orchestrator | 2025-05-23 01:06:47 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:06:47.937813 | orchestrator | 2025-05-23 01:06:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:06:50.983234 | orchestrator | 2025-05-23 01:06:50 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:06:50.984404 | orchestrator | 2025-05-23 01:06:50 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:06:50.986062 | orchestrator | 2025-05-23 01:06:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:06:50.988768 | orchestrator | 2025-05-23 01:06:50 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:06:50.990606 | orchestrator | 2025-05-23 01:06:50 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:06:50.990683 | orchestrator | 2025-05-23 01:06:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:06:54.028397 | orchestrator | 2025-05-23 01:06:54 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:06:54.028890 | orchestrator | 2025-05-23 01:06:54 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:06:54.029199 | orchestrator | 2025-05-23 01:06:54 | INFO  | 
Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:06:54.031082 | orchestrator | 2025-05-23 01:06:54 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:06:54.031103 | orchestrator | 2025-05-23 01:06:54 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:06:54.031114 | orchestrator | 2025-05-23 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:06:57.070622 | orchestrator | 2025-05-23 01:06:57 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:06:57.070728 | orchestrator | 2025-05-23 01:06:57 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:06:57.071484 | orchestrator | 2025-05-23 01:06:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:06:57.072728 | orchestrator | 2025-05-23 01:06:57 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:06:57.073540 | orchestrator | 2025-05-23 01:06:57 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:06:57.074382 | orchestrator | 2025-05-23 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:00.119300 | orchestrator | 2025-05-23 01:07:00 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:00.120926 | orchestrator | 2025-05-23 01:07:00 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:00.123363 | orchestrator | 2025-05-23 01:07:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:00.125293 | orchestrator | 2025-05-23 01:07:00 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:00.127308 | orchestrator | 2025-05-23 01:07:00 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:00.127342 | orchestrator | 2025-05-23 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:03.184621 | orchestrator | 2025-05-23 01:07:03 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:03.185753 | orchestrator | 2025-05-23 01:07:03 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:03.187798 | orchestrator | 2025-05-23 01:07:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:03.189117 | orchestrator | 2025-05-23 01:07:03 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:03.190152 | orchestrator | 2025-05-23 01:07:03 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:03.190371 | orchestrator | 2025-05-23 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:06.242281 | orchestrator | 2025-05-23 01:07:06 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:06.244559 | orchestrator | 2025-05-23 01:07:06 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:06.246381 | orchestrator | 2025-05-23 01:07:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:06.248597 | orchestrator | 2025-05-23 01:07:06 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:06.250168 | orchestrator | 2025-05-23 01:07:06 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:06.250432 | orchestrator | 
2025-05-23 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:09.301852 | orchestrator | 2025-05-23 01:07:09 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:09.302628 | orchestrator | 2025-05-23 01:07:09 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:09.304709 | orchestrator | 2025-05-23 01:07:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:09.305886 | orchestrator | 2025-05-23 01:07:09 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:09.307403 | orchestrator | 2025-05-23 01:07:09 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:09.307477 | orchestrator | 2025-05-23 01:07:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:12.377902 | orchestrator | 2025-05-23 01:07:12 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:12.378012 | orchestrator | 2025-05-23 01:07:12 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:12.378670 | orchestrator | 2025-05-23 01:07:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:12.379472 | orchestrator | 2025-05-23 01:07:12 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:12.381310 | orchestrator | 2025-05-23 01:07:12 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:12.381335 | orchestrator | 2025-05-23 01:07:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:15.438707 | orchestrator | 2025-05-23 01:07:15 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:15.440550 | orchestrator | 2025-05-23 01:07:15 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:15.442253 | orchestrator | 2025-05-23 01:07:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:15.443924 | orchestrator | 2025-05-23 01:07:15 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:15.448584 | orchestrator | 2025-05-23 01:07:15 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:15.448618 | orchestrator | 2025-05-23 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:18.503524 | orchestrator | 2025-05-23 01:07:18 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state STARTED 2025-05-23 01:07:18.504062 | orchestrator | 2025-05-23 01:07:18 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:18.505438 | orchestrator | 2025-05-23 01:07:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:18.507849 | orchestrator | 2025-05-23 01:07:18 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:18.509076 | orchestrator | 2025-05-23 01:07:18 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:18.509105 | orchestrator | 2025-05-23 01:07:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:21.554438 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task fde95b10-9b9e-451f-8f7b-305de914281b is in state SUCCESS 2025-05-23 01:07:21.556853 | orchestrator | 2025-05-23 01:07:21.556894 | orchestrator | 2025-05-23 01:07:21.556909 | orchestrator | PLAY [Group hosts based on configuration] 
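The repeated "Task <id> is in state STARTED ... Wait 1 second(s) until the next check" lines above are the OSISM manager polling its queued deployment tasks until each one reports SUCCESS (or a failure). A generic sketch of such a poll-and-wait loop; the helper name and the get_state callable are hypothetical, this is not the actual osism code:

import logging
import time

log = logging.getLogger("wait-for-tasks")

def wait_for_tasks(task_ids, get_state, interval=1):
    # Poll every task until it leaves the STARTED state (illustrative only).
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. a Celery AsyncResult(task_id).state lookup
            log.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)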
************************************** 2025-05-23 01:07:21.556921 | orchestrator | 2025-05-23 01:07:21.556933 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:07:21.556944 | orchestrator | Friday 23 May 2025 01:03:46 +0000 (0:00:00.249) 0:00:00.249 ************ 2025-05-23 01:07:21.556956 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:07:21.556968 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:07:21.556979 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:07:21.556990 | orchestrator | 2025-05-23 01:07:21.557001 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:07:21.557012 | orchestrator | Friday 23 May 2025 01:03:46 +0000 (0:00:00.309) 0:00:00.559 ************ 2025-05-23 01:07:21.557024 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-23 01:07:21.557086 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-23 01:07:21.557101 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-23 01:07:21.557112 | orchestrator | 2025-05-23 01:07:21.557123 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-23 01:07:21.557134 | orchestrator | 2025-05-23 01:07:21.557145 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-23 01:07:21.557156 | orchestrator | Friday 23 May 2025 01:03:46 +0000 (0:00:00.255) 0:00:00.815 ************ 2025-05-23 01:07:21.557289 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:07:21.557302 | orchestrator | 2025-05-23 01:07:21.557313 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-23 01:07:21.557324 | orchestrator | Friday 23 May 2025 01:03:47 +0000 (0:00:00.591) 0:00:01.406 ************ 2025-05-23 01:07:21.557334 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-23 01:07:21.557345 | orchestrator | 2025-05-23 01:07:21.557356 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-23 01:07:21.557566 | orchestrator | Friday 23 May 2025 01:03:50 +0000 (0:00:03.096) 0:00:04.503 ************ 2025-05-23 01:07:21.557577 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-23 01:07:21.557589 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-23 01:07:21.557600 | orchestrator | 2025-05-23 01:07:21.557610 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-23 01:07:21.557621 | orchestrator | Friday 23 May 2025 01:03:57 +0000 (0:00:06.483) 0:00:10.986 ************ 2025-05-23 01:07:21.557649 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:07:21.557661 | orchestrator | 2025-05-23 01:07:21.557672 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-23 01:07:21.557683 | orchestrator | Friday 23 May 2025 01:04:00 +0000 (0:00:03.255) 0:00:14.242 ************ 2025-05-23 01:07:21.557694 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:07:21.557727 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-23 01:07:21.557738 | orchestrator | 2025-05-23 01:07:21.557749 
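The service-ks-register tasks above (together with the role grant that follows) register Glance in Keystone: a "glance" service of type image, internal and public endpoints on port 9292, the "service" project, a "glance" service user and an admin role assignment. A rough openstacksdk equivalent, shown only as a sketch; the clouds.yaml entry name and the password are placeholders, and error handling and idempotency checks are omitted:

import openstack

conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

# Service and endpoints (URLs as printed in the log above).
service = conn.identity.create_service(type="image", name="glance")
for interface, url in [("internal", "https://api-int.testbed.osism.xyz:9292"),
                       ("public", "https://api.testbed.osism.xyz:9292")]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Service project, glance user and admin role grant.
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="glance", password="CHANGE_ME",
                                 default_project_id=project.id)
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin_role)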
| orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-23 01:07:21.557760 | orchestrator | Friday 23 May 2025 01:04:04 +0000 (0:00:03.902) 0:00:18.144 ************ 2025-05-23 01:07:21.557771 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:07:21.557781 | orchestrator | 2025-05-23 01:07:21.557815 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-23 01:07:21.557827 | orchestrator | Friday 23 May 2025 01:04:07 +0000 (0:00:03.402) 0:00:21.547 ************ 2025-05-23 01:07:21.557838 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-23 01:07:21.557848 | orchestrator | 2025-05-23 01:07:21.557859 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-23 01:07:21.557870 | orchestrator | Friday 23 May 2025 01:04:11 +0000 (0:00:04.192) 0:00:25.739 ************ 2025-05-23 01:07:21.557902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.557924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.557946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.557975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.557995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.558074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.558092 | orchestrator | 2025-05-23 01:07:21.558104 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-23 01:07:21.558115 | orchestrator | Friday 23 May 2025 01:04:15 +0000 (0:00:04.030) 0:00:29.770 ************ 2025-05-23 01:07:21.558133 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:07:21.558145 | orchestrator | 2025-05-23 01:07:21.558155 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-23 01:07:21.558166 | orchestrator | Friday 23 May 2025 01:04:16 +0000 (0:00:00.528) 0:00:30.298 ************ 2025-05-23 01:07:21.558183 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:07:21.558196 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.558209 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:07:21.558222 | orchestrator | 2025-05-23 01:07:21.558234 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-23 01:07:21.558246 | orchestrator | Friday 23 May 2025 01:04:26 +0000 (0:00:10.189) 0:00:40.488 ************ 2025-05-23 01:07:21.558259 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558284 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558297 | orchestrator | 2025-05-23 01:07:21.558309 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-23 01:07:21.558321 | orchestrator | Friday 23 May 2025 01:04:28 +0000 (0:00:02.138) 0:00:42.627 ************ 2025-05-23 01:07:21.558333 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558345 | orchestrator | changed: [testbed-node-1] 
=> (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558357 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:21.558370 | orchestrator | 2025-05-23 01:07:21.558382 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-23 01:07:21.558395 | orchestrator | Friday 23 May 2025 01:04:29 +0000 (0:00:01.105) 0:00:43.732 ************ 2025-05-23 01:07:21.558408 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:07:21.558420 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:07:21.558432 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:07:21.558446 | orchestrator | 2025-05-23 01:07:21.558458 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-23 01:07:21.558471 | orchestrator | Friday 23 May 2025 01:04:30 +0000 (0:00:00.715) 0:00:44.447 ************ 2025-05-23 01:07:21.558484 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.558496 | orchestrator | 2025-05-23 01:07:21.558509 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-23 01:07:21.558522 | orchestrator | Friday 23 May 2025 01:04:30 +0000 (0:00:00.122) 0:00:44.570 ************ 2025-05-23 01:07:21.558534 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.558545 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.558555 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.558566 | orchestrator | 2025-05-23 01:07:21.558577 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-23 01:07:21.558588 | orchestrator | Friday 23 May 2025 01:04:31 +0000 (0:00:00.318) 0:00:44.888 ************ 2025-05-23 01:07:21.558598 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:07:21.558609 | orchestrator | 2025-05-23 01:07:21.558619 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-23 01:07:21.558630 | orchestrator | Friday 23 May 2025 01:04:31 +0000 (0:00:00.568) 0:00:45.456 ************ 2025-05-23 01:07:21.558651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.558677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.558699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.558720 | orchestrator | 2025-05-23 01:07:21.558731 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-23 01:07:21.558742 | orchestrator | Friday 23 May 2025 01:04:35 +0000 (0:00:03.743) 0:00:49.200 ************ 2025-05-23 01:07:21.558758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558771 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.558809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558830 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.558852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558864 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.558875 | orchestrator | 2025-05-23 01:07:21.558886 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-23 01:07:21.558897 | orchestrator | Friday 23 May 2025 01:04:39 +0000 (0:00:03.818) 0:00:53.019 ************ 2025-05-23 01:07:21.558916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558935 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.558952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558964 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.558976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-23 01:07:21.558988 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559005 | orchestrator | 2025-05-23 01:07:21.559015 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-23 01:07:21.559026 | orchestrator | Friday 23 May 2025 01:04:44 +0000 (0:00:05.533) 0:00:58.552 ************ 2025-05-23 01:07:21.559037 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559048 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559059 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559070 | orchestrator | 2025-05-23 01:07:21.559086 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-23 01:07:21.559098 | orchestrator | Friday 23 May 2025 01:04:48 +0000 (0:00:04.049) 0:01:02.602 ************ 2025-05-23 01:07:21.559114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.559127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.559155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.559174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.559194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.559218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.559231 | orchestrator | 2025-05-23 01:07:21.559242 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-23 01:07:21.559253 | orchestrator | Friday 23 May 2025 01:04:56 +0000 (0:00:07.594) 0:01:10.197 ************ 2025-05-23 01:07:21.559264 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:07:21.559275 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:07:21.559285 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.559296 | orchestrator | 2025-05-23 01:07:21.559307 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-23 01:07:21.559324 | orchestrator | Friday 23 May 2025 01:05:13 +0000 (0:00:17.185) 0:01:27.383 ************ 2025-05-23 01:07:21.559335 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559476 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559500 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559517 | 
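Editor's aside: the glance_api entries above carry an explicit custom_member_list plus 6-hour client/server timeouts for the HAProxy frontends and backends. A minimal sketch of how such a member list could be rendered into an HAProxy-style backend stanza, using a hypothetical render_backend() helper rather than the actual kolla-ansible haproxy-config template:

# Hypothetical sketch only: renders the glance_api haproxy data seen in the log
# into an HAProxy-style backend. Not the kolla-ansible template itself.
glance_api = {
    "port": "9292",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",  # the trailing empty member seen in the log is dropped below
    ],
}

def render_backend(name: str, svc: dict) -> str:
    lines = [f"backend {name}_back", "    mode http"]
    lines += [f"    {extra}" for extra in svc.get("backend_http_extra", [])]
    lines += [f"    {member}" for member in svc["custom_member_list"] if member]
    return "\n".join(lines)

print(render_backend("glance_api", glance_api))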
orchestrator | 2025-05-23 01:07:21.559535 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-23 01:07:21.559555 | orchestrator | Friday 23 May 2025 01:05:22 +0000 (0:00:08.978) 0:01:36.361 ************ 2025-05-23 01:07:21.559574 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559592 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559610 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559621 | orchestrator | 2025-05-23 01:07:21.559632 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-23 01:07:21.559642 | orchestrator | Friday 23 May 2025 01:05:31 +0000 (0:00:09.217) 0:01:45.578 ************ 2025-05-23 01:07:21.559653 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559663 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559674 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559684 | orchestrator | 2025-05-23 01:07:21.559694 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-23 01:07:21.559705 | orchestrator | Friday 23 May 2025 01:05:43 +0000 (0:00:11.401) 0:01:56.979 ************ 2025-05-23 01:07:21.559715 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559736 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559747 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559758 | orchestrator | 2025-05-23 01:07:21.559768 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-23 01:07:21.559779 | orchestrator | Friday 23 May 2025 01:05:57 +0000 (0:00:14.312) 0:02:11.292 ************ 2025-05-23 01:07:21.559850 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559864 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559875 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559886 | orchestrator | 2025-05-23 01:07:21.559896 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-23 01:07:21.559907 | orchestrator | Friday 23 May 2025 01:05:57 +0000 (0:00:00.244) 0:02:11.537 ************ 2025-05-23 01:07:21.559917 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-23 01:07:21.559928 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.559939 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-23 01:07:21.559950 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.559960 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-23 01:07:21.559971 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.559981 | orchestrator | 2025-05-23 01:07:21.559992 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-23 01:07:21.560003 | orchestrator | Friday 23 May 2025 01:06:00 +0000 (0:00:03.080) 0:02:14.617 ************ 2025-05-23 01:07:21.560023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.560055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.560073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.560100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.560120 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-23 01:07:21.560147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-23 01:07:21.560161 | orchestrator | 2025-05-23 01:07:21.560173 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-23 01:07:21.560185 | orchestrator | Friday 23 May 2025 01:06:04 +0000 (0:00:03.324) 0:02:17.942 ************ 2025-05-23 01:07:21.560197 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:21.560209 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:21.560221 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:21.560234 | orchestrator | 2025-05-23 01:07:21.560252 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-23 01:07:21.560265 | orchestrator | Friday 23 May 2025 01:06:04 +0000 (0:00:00.347) 0:02:18.290 ************ 2025-05-23 01:07:21.560277 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.560288 | orchestrator | 2025-05-23 01:07:21.560301 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-23 01:07:21.560313 | orchestrator | Friday 23 May 2025 01:06:06 +0000 (0:00:02.038) 0:02:20.328 ************ 2025-05-23 01:07:21.560325 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.560337 | orchestrator | 2025-05-23 01:07:21.560348 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-23 01:07:21.560360 | orchestrator | Friday 23 May 2025 01:06:08 +0000 (0:00:02.191) 0:02:22.520 ************ 2025-05-23 01:07:21.560372 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.560385 | orchestrator | 2025-05-23 01:07:21.560397 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-23 01:07:21.560410 | orchestrator | Friday 23 May 2025 01:06:10 +0000 (0:00:02.160) 0:02:24.680 ************ 2025-05-23 01:07:21.560422 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.560434 | orchestrator | 2025-05-23 01:07:21.560447 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-23 01:07:21.560460 | orchestrator | Friday 23 May 2025 01:06:37 +0000 (0:00:26.890) 0:02:51.570 ************ 2025-05-23 01:07:21.560472 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:21.560482 | orchestrator | 2025-05-23 01:07:21.560493 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-23 01:07:21.560511 | orchestrator | Friday 23 May 2025 01:06:39 +0000 (0:00:02.271) 0:02:53.842 ************ 2025-05-23 01:07:21.560522 | orchestrator | 2025-05-23 01:07:21.560533 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-23 01:07:21.560543 | orchestrator | Friday 23 May 2025 01:06:40 +0000 (0:00:00.059) 0:02:53.901 ************ 2025-05-23 01:07:21.560554 | orchestrator | 2025-05-23 01:07:21.560564 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-23 01:07:21.560575 | orchestrator | Friday 23 May 2025 01:06:40 +0000 (0:00:00.055) 0:02:53.956 ************ 2025-05-23 01:07:21.560586 | orchestrator | 2025-05-23 01:07:21.560596 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-23 01:07:21.560607 | orchestrator | Friday 23 May 2025 01:06:40 +0000 (0:00:00.248) 0:02:54.205 ************ 2025-05-23 01:07:21.560617 | orchestrator | changed: [testbed-node-0] 
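Editor's aside: the bootstrap sequence above follows the usual pattern for MariaDB/Galera with binary logging enabled: create the Glance database and user, temporarily enable log_bin_trust_function_creators so the schema migration can create stored functions, run the bootstrap (db sync) container, then disable the flag again. A minimal, hypothetical sketch of that flag toggle using PyMySQL; host and credentials are placeholders, not values from this deployment:

import pymysql

# Placeholder connection details; the real deployment uses the internal VIP
# and generated secrets managed by kolla-ansible.
conn = pymysql.connect(host="192.168.16.9", user="root", password="secret", autocommit=True)
with conn.cursor() as cur:
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
    # ... the glance bootstrap container would run its "db sync" at this point ...
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
conn.close()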
2025-05-23 01:07:21.560628 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:07:21.560639 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:07:21.560649 | orchestrator | 2025-05-23 01:07:21.560660 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:07:21.560676 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-23 01:07:21.560688 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-23 01:07:21.560699 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-23 01:07:21.560709 | orchestrator | 2025-05-23 01:07:21.560720 | orchestrator | 2025-05-23 01:07:21.560731 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:07:21.560741 | orchestrator | Friday 23 May 2025 01:07:18 +0000 (0:00:38.027) 0:03:32.232 ************ 2025-05-23 01:07:21.560752 | orchestrator | =============================================================================== 2025-05-23 01:07:21.560763 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.03s 2025-05-23 01:07:21.560773 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.89s 2025-05-23 01:07:21.560784 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 17.19s 2025-05-23 01:07:21.560848 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 14.31s 2025-05-23 01:07:21.560860 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 11.40s 2025-05-23 01:07:21.560871 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 10.19s 2025-05-23 01:07:21.560882 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 9.22s 2025-05-23 01:07:21.560892 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 8.98s 2025-05-23 01:07:21.560903 | orchestrator | glance : Copying over config.json files for services -------------------- 7.59s 2025-05-23 01:07:21.560914 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.48s 2025-05-23 01:07:21.560924 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.53s 2025-05-23 01:07:21.560935 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.19s 2025-05-23 01:07:21.560945 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.05s 2025-05-23 01:07:21.560956 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.03s 2025-05-23 01:07:21.560967 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.90s 2025-05-23 01:07:21.560978 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.82s 2025-05-23 01:07:21.560988 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.74s 2025-05-23 01:07:21.561006 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.40s 2025-05-23 01:07:21.561017 | orchestrator | glance : Check glance containers ---------------------------------------- 3.32s 2025-05-23 01:07:21.561034 
| orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.26s 2025-05-23 01:07:21.561045 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:21.567577 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:21.568654 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:21.570255 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:21.573909 | orchestrator | 2025-05-23 01:07:21 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:21.573954 | orchestrator | 2025-05-23 01:07:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:24.628537 | orchestrator | 2025-05-23 01:07:24 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:24.631498 | orchestrator | 2025-05-23 01:07:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:24.633778 | orchestrator | 2025-05-23 01:07:24 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:24.636515 | orchestrator | 2025-05-23 01:07:24 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:24.639173 | orchestrator | 2025-05-23 01:07:24 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:24.639247 | orchestrator | 2025-05-23 01:07:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:27.701746 | orchestrator | 2025-05-23 01:07:27 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:27.703126 | orchestrator | 2025-05-23 01:07:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:27.706411 | orchestrator | 2025-05-23 01:07:27 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:27.708398 | orchestrator | 2025-05-23 01:07:27 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:27.710620 | orchestrator | 2025-05-23 01:07:27 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:27.710655 | orchestrator | 2025-05-23 01:07:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:30.782852 | orchestrator | 2025-05-23 01:07:30 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state STARTED 2025-05-23 01:07:30.783235 | orchestrator | 2025-05-23 01:07:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:30.784291 | orchestrator | 2025-05-23 01:07:30 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:30.784561 | orchestrator | 2025-05-23 01:07:30 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:30.785619 | orchestrator | 2025-05-23 01:07:30 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:30.785684 | orchestrator | 2025-05-23 01:07:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:33.889004 | orchestrator | 2025-05-23 01:07:33 | INFO  | Task f913b611-7483-4410-a161-ca050b1da473 is in state SUCCESS 2025-05-23 01:07:33.889606 | orchestrator | 2025-05-23 01:07:33.892159 | orchestrator | 2025-05-23 01:07:33.892345 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2025-05-23 01:07:33.892399 | orchestrator | 2025-05-23 01:07:33.892413 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:07:33.892469 | orchestrator | Friday 23 May 2025 01:04:16 +0000 (0:00:00.261) 0:00:00.261 ************ 2025-05-23 01:07:33.892483 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:07:33.892816 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:07:33.892834 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:07:33.892845 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:07:33.892860 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:07:33.892879 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:07:33.892896 | orchestrator | 2025-05-23 01:07:33.892913 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:07:33.892925 | orchestrator | Friday 23 May 2025 01:04:17 +0000 (0:00:00.603) 0:00:00.865 ************ 2025-05-23 01:07:33.892936 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-23 01:07:33.892947 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-23 01:07:33.892958 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-23 01:07:33.893022 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-23 01:07:33.893034 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-23 01:07:33.893045 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-23 01:07:33.893056 | orchestrator | 2025-05-23 01:07:33.893066 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-23 01:07:33.893077 | orchestrator | 2025-05-23 01:07:33.893088 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-23 01:07:33.893129 | orchestrator | Friday 23 May 2025 01:04:18 +0000 (0:00:01.149) 0:00:02.015 ************ 2025-05-23 01:07:33.893144 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:07:33.893156 | orchestrator | 2025-05-23 01:07:33.893167 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-23 01:07:33.893178 | orchestrator | Friday 23 May 2025 01:04:21 +0000 (0:00:02.994) 0:00:05.009 ************ 2025-05-23 01:07:33.893189 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-23 01:07:33.893200 | orchestrator | 2025-05-23 01:07:33.893210 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-23 01:07:33.893221 | orchestrator | Friday 23 May 2025 01:04:24 +0000 (0:00:03.326) 0:00:08.336 ************ 2025-05-23 01:07:33.893343 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-23 01:07:33.893357 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-23 01:07:33.893368 | orchestrator | 2025-05-23 01:07:33.893379 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-23 01:07:33.893390 | orchestrator | Friday 23 May 2025 01:04:31 +0000 (0:00:06.401) 0:00:14.738 ************ 2025-05-23 01:07:33.893403 | orchestrator | 
ok: [testbed-node-0] => (item=service) 2025-05-23 01:07:33.893869 | orchestrator | 2025-05-23 01:07:33.893880 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-23 01:07:33.893891 | orchestrator | Friday 23 May 2025 01:04:34 +0000 (0:00:03.238) 0:00:17.977 ************ 2025-05-23 01:07:33.893902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:07:33.893913 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-23 01:07:33.893923 | orchestrator | 2025-05-23 01:07:33.893934 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-23 01:07:33.893945 | orchestrator | Friday 23 May 2025 01:04:38 +0000 (0:00:03.890) 0:00:21.867 ************ 2025-05-23 01:07:33.893955 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:07:33.893966 | orchestrator | 2025-05-23 01:07:33.893976 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-23 01:07:33.894000 | orchestrator | Friday 23 May 2025 01:04:41 +0000 (0:00:03.341) 0:00:25.208 ************ 2025-05-23 01:07:33.894011 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-23 01:07:33.894485 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-23 01:07:33.894507 | orchestrator | 2025-05-23 01:07:33.894519 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-23 01:07:33.894530 | orchestrator | Friday 23 May 2025 01:04:50 +0000 (0:00:08.712) 0:00:33.921 ************ 2025-05-23 01:07:33.894591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.894609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 
01:07:33.894621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.894634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.894677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.894747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.894765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.894894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.894931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.894956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.894998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.895012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.895023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.895035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.895059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.895071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.895114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.895128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.895142 | orchestrator | 2025-05-23 01:07:33.895154 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-23 01:07:33.895168 | orchestrator | Friday 23 May 2025 01:04:53 +0000 (0:00:03.535) 0:00:37.457 
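Editor's aside: each container definition above carries a healthcheck dict (interval, retries, start_period, test, timeout) that is passed through to the container engine. As a rough illustration of what those fields correspond to, assuming a hypothetical helper that emits Docker-CLI-style flags rather than the exact kolla container module behaviour:

# Hypothetical sketch: translate a kolla-style healthcheck dict into
# Docker-CLI-style flags. Illustrative only; not the kolla container module.
def healthcheck_flags(hc: dict) -> list[str]:
    cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port cinder-scheduler 5672"], "timeout": "30",
}))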
************ 2025-05-23 01:07:33.895180 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.895193 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.895205 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.895216 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:07:33.895234 | orchestrator | 2025-05-23 01:07:33.895245 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-23 01:07:33.895256 | orchestrator | Friday 23 May 2025 01:04:56 +0000 (0:00:02.947) 0:00:40.405 ************ 2025-05-23 01:07:33.895268 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-23 01:07:33.895279 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-23 01:07:33.895290 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-23 01:07:33.895304 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-23 01:07:33.895321 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-23 01:07:33.895331 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-23 01:07:33.895340 | orchestrator | 2025-05-23 01:07:33.895350 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-23 01:07:33.895360 | orchestrator | Friday 23 May 2025 01:05:02 +0000 (0:00:05.466) 0:00:45.871 ************ 2025-05-23 01:07:33.895375 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895386 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895426 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895438 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895461 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895475 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-23 01:07:33.895486 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895569 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895584 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895601 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895617 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895654 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-23 01:07:33.895666 | orchestrator | 2025-05-23 01:07:33.895676 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-23 01:07:33.895685 | orchestrator | Friday 23 May 2025 01:05:08 +0000 (0:00:05.896) 0:00:51.767 ************ 2025-05-23 01:07:33.895695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:33.895705 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:33.895715 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-23 01:07:33.895724 | orchestrator | 2025-05-23 01:07:33.895734 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-23 01:07:33.895743 | orchestrator | Friday 23 May 2025 01:05:10 +0000 (0:00:02.367) 0:00:54.135 ************ 2025-05-23 01:07:33.895753 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-23 01:07:33.895768 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-23 01:07:33.895800 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-23 01:07:33.895810 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-23 01:07:33.895819 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-23 01:07:33.895829 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-23 01:07:33.895838 | orchestrator | 2025-05-23 01:07:33.895848 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-23 01:07:33.895857 | orchestrator | Friday 23 May 2025 01:05:13 +0000 (0:00:02.922) 0:00:57.058 ************ 2025-05-23 01:07:33.895867 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-23 01:07:33.895876 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-23 01:07:33.895886 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-23 01:07:33.895895 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-23 
01:07:33.895905 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-23 01:07:33.895914 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-23 01:07:33.895924 | orchestrator | 2025-05-23 01:07:33.895933 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-23 01:07:33.895943 | orchestrator | Friday 23 May 2025 01:05:14 +0000 (0:00:01.080) 0:00:58.138 ************ 2025-05-23 01:07:33.895952 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.895962 | orchestrator | 2025-05-23 01:07:33.895971 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-23 01:07:33.895981 | orchestrator | Friday 23 May 2025 01:05:14 +0000 (0:00:00.158) 0:00:58.296 ************ 2025-05-23 01:07:33.895990 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.895999 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.896009 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.896018 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.896027 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.896037 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.896046 | orchestrator | 2025-05-23 01:07:33.896056 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-23 01:07:33.896065 | orchestrator | Friday 23 May 2025 01:05:15 +0000 (0:00:00.848) 0:00:59.145 ************ 2025-05-23 01:07:33.896076 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:07:33.896087 | orchestrator | 2025-05-23 01:07:33.896097 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-23 01:07:33.896106 | orchestrator | Friday 23 May 2025 01:05:18 +0000 (0:00:02.400) 0:01:01.546 ************ 2025-05-23 01:07:33.896121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.896162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.896181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.896191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.896344 | orchestrator | 2025-05-23 01:07:33.896355 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-23 01:07:33.896367 | orchestrator | Friday 23 May 2025 01:05:21 +0000 (0:00:03.751) 0:01:05.298 ************ 2025-05-23 01:07:33.896404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896426 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.896436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896456 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.896471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896525 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.896535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896555 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.896565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896595 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.896630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896642 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896651 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.896661 | orchestrator | 2025-05-23 01:07:33.896671 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-23 01:07:33.896681 | orchestrator | Friday 23 May 2025 01:05:23 +0000 (0:00:01.741) 0:01:07.039 ************ 2025-05-23 01:07:33.896691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896720 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.896734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896799 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.896810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.896820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896830 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.896840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896870 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.896907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896929 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.896938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.896964 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.896974 | orchestrator | 2025-05-23 01:07:33.896983 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-23 01:07:33.896993 | orchestrator | Friday 23 May 2025 01:05:26 +0000 (0:00:02.729) 0:01:09.769 ************ 2025-05-23 01:07:33.897008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897422 | orchestrator | 2025-05-23 01:07:33.897432 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-23 01:07:33.897441 | orchestrator | Friday 23 May 2025 01:05:29 +0000 (0:00:03.146) 0:01:12.916 ************ 2025-05-23 01:07:33.897451 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-23 01:07:33.897461 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.897470 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-23 01:07:33.897480 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.897583 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-23 01:07:33.897598 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.897608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-23 01:07:33.897618 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-23 01:07:33.897627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-23 01:07:33.897637 | orchestrator | 2025-05-23 01:07:33.897646 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-23 01:07:33.897656 | orchestrator | Friday 23 May 2025 01:05:32 +0000 (0:00:03.256) 0:01:16.172 ************ 2025-05-23 01:07:33.897666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.897743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897805 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.897823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.897978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.897992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898013 | orchestrator | 2025-05-23 01:07:33.898061 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-23 01:07:33.898072 | orchestrator | Friday 23 May 2025 01:05:43 +0000 (0:00:10.903) 0:01:27.075 ************ 2025-05-23 01:07:33.898082 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.898092 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.898107 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.898117 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:07:33.898126 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:07:33.898136 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:07:33.898145 | orchestrator | 2025-05-23 01:07:33.898155 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-23 01:07:33.898165 | orchestrator | Friday 23 May 2025 01:05:49 +0000 (0:00:05.476) 0:01:32.551 ************ 2025-05-23 01:07:33.898175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898278 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.898294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898339 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.898351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898362 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.898374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898452 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.898464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898537 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.898547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898567 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.898576 | orchestrator | 2025-05-23 01:07:33.898586 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-23 01:07:33.898600 | orchestrator | Friday 23 May 2025 01:05:52 +0000 (0:00:02.987) 0:01:35.539 ************ 2025-05-23 01:07:33.898610 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.898628 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.898638 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.898647 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.898656 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.898666 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.898675 | orchestrator | 2025-05-23 01:07:33.898685 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-23 01:07:33.898694 | orchestrator | Friday 23 May 2025 01:05:53 +0000 (0:00:01.840) 0:01:37.379 ************ 2025-05-23 01:07:33.898709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898720 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-23 01:07:33.898771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.898829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.898844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-23 01:07:33.898875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898910 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.898972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.898991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.899006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.899017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-23 01:07:33.899027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-23 01:07:33.899037 | orchestrator | 2025-05-23 01:07:33.899047 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-23 01:07:33.899057 | orchestrator | Friday 23 May 2025 01:05:58 +0000 (0:00:04.344) 0:01:41.723 ************ 2025-05-23 01:07:33.899066 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.899076 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:07:33.899086 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:07:33.899095 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:07:33.899105 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:07:33.899114 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:07:33.899129 | orchestrator | 2025-05-23 01:07:33.899138 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-23 01:07:33.899148 | orchestrator | Friday 23 May 2025 01:05:58 +0000 (0:00:00.590) 0:01:42.314 ************ 2025-05-23 01:07:33.899157 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:33.899167 | orchestrator | 2025-05-23 01:07:33.899176 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-23 01:07:33.899186 | orchestrator | Friday 23 May 2025 01:06:01 +0000 (0:00:02.290) 0:01:44.604 ************ 2025-05-23 01:07:33.899195 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:33.899205 | orchestrator | 2025-05-23 01:07:33.899214 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-23 01:07:33.899224 | orchestrator | Friday 23 May 2025 01:06:03 +0000 (0:00:02.179) 0:01:46.783 ************ 2025-05-23 01:07:33.899234 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:33.899243 | orchestrator | 2025-05-23 01:07:33.899252 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899262 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:17.282) 0:02:04.066 ************ 2025-05-23 01:07:33.899271 | orchestrator | 2025-05-23 01:07:33.899281 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899290 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:00.061) 0:02:04.127 ************ 2025-05-23 01:07:33.899300 | orchestrator | 2025-05-23 01:07:33.899309 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899319 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:00.259) 0:02:04.387 ************ 2025-05-23 01:07:33.899328 | orchestrator | 2025-05-23 01:07:33.899341 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899351 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:00.062) 0:02:04.449 ************ 2025-05-23 01:07:33.899361 | 
orchestrator | 2025-05-23 01:07:33.899370 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899380 | orchestrator | Friday 23 May 2025 01:06:21 +0000 (0:00:00.055) 0:02:04.504 ************ 2025-05-23 01:07:33.899389 | orchestrator | 2025-05-23 01:07:33.899399 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-23 01:07:33.899409 | orchestrator | Friday 23 May 2025 01:06:21 +0000 (0:00:00.056) 0:02:04.561 ************ 2025-05-23 01:07:33.899418 | orchestrator | 2025-05-23 01:07:33.899428 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-23 01:07:33.899437 | orchestrator | Friday 23 May 2025 01:06:21 +0000 (0:00:00.267) 0:02:04.829 ************ 2025-05-23 01:07:33.899447 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:33.899456 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:07:33.899466 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:07:33.899475 | orchestrator | 2025-05-23 01:07:33.899485 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-23 01:07:33.899494 | orchestrator | Friday 23 May 2025 01:06:40 +0000 (0:00:19.461) 0:02:24.290 ************ 2025-05-23 01:07:33.899504 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:07:33.899513 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:07:33.899523 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:07:33.899532 | orchestrator | 2025-05-23 01:07:33.899542 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-23 01:07:33.899557 | orchestrator | Friday 23 May 2025 01:06:53 +0000 (0:00:12.358) 0:02:36.648 ************ 2025-05-23 01:07:33.899567 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:07:33.899576 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:07:33.899586 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:07:33.899595 | orchestrator | 2025-05-23 01:07:33.899605 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-23 01:07:33.899614 | orchestrator | Friday 23 May 2025 01:07:17 +0000 (0:00:24.177) 0:03:00.826 ************ 2025-05-23 01:07:33.899624 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:07:33.899642 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:07:33.899651 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:07:33.899661 | orchestrator | 2025-05-23 01:07:33.899670 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-23 01:07:33.899680 | orchestrator | Friday 23 May 2025 01:07:29 +0000 (0:00:12.396) 0:03:13.222 ************ 2025-05-23 01:07:33.899689 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:07:33.899699 | orchestrator | 2025-05-23 01:07:33.899709 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:07:33.899718 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-23 01:07:33.899729 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-23 01:07:33.899739 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-23 01:07:33.899748 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 
failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:07:33.899758 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:07:33.899767 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-23 01:07:33.899831 | orchestrator | 2025-05-23 01:07:33.899843 | orchestrator | 2025-05-23 01:07:33.899853 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:07:33.899862 | orchestrator | Friday 23 May 2025 01:07:30 +0000 (0:00:00.720) 0:03:13.943 ************ 2025-05-23 01:07:33.899870 | orchestrator | =============================================================================== 2025-05-23 01:07:33.899878 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.18s 2025-05-23 01:07:33.899885 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.46s 2025-05-23 01:07:33.899893 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.28s 2025-05-23 01:07:33.899901 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.40s 2025-05-23 01:07:33.899909 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.36s 2025-05-23 01:07:33.899917 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.90s 2025-05-23 01:07:33.899924 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.71s 2025-05-23 01:07:33.899932 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.40s 2025-05-23 01:07:33.899940 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.90s 2025-05-23 01:07:33.899948 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 5.48s 2025-05-23 01:07:33.899956 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.47s 2025-05-23 01:07:33.899963 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.34s 2025-05-23 01:07:33.899975 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.89s 2025-05-23 01:07:33.899983 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.75s 2025-05-23 01:07:33.899991 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.54s 2025-05-23 01:07:33.899999 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.34s 2025-05-23 01:07:33.900007 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.33s 2025-05-23 01:07:33.900015 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.26s 2025-05-23 01:07:33.900027 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.24s 2025-05-23 01:07:33.900035 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.15s 2025-05-23 01:07:33.900043 | orchestrator | 2025-05-23 01:07:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:33.900051 | orchestrator | 2025-05-23 01:07:33 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:33.900059 | 
orchestrator | 2025-05-23 01:07:33 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:33.900449 | orchestrator | 2025-05-23 01:07:33 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:33.902560 | orchestrator | 2025-05-23 01:07:33 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:33.902577 | orchestrator | 2025-05-23 01:07:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:36.960234 | orchestrator | 2025-05-23 01:07:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:36.964641 | orchestrator | 2025-05-23 01:07:36 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:36.967477 | orchestrator | 2025-05-23 01:07:36 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:36.971624 | orchestrator | 2025-05-23 01:07:36 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:36.974164 | orchestrator | 2025-05-23 01:07:36 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:36.975398 | orchestrator | 2025-05-23 01:07:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:40.029817 | orchestrator | 2025-05-23 01:07:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:40.030813 | orchestrator | 2025-05-23 01:07:40 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:40.031053 | orchestrator | 2025-05-23 01:07:40 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:40.031875 | orchestrator | 2025-05-23 01:07:40 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:40.033122 | orchestrator | 2025-05-23 01:07:40 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:40.033283 | orchestrator | 2025-05-23 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:43.081599 | orchestrator | 2025-05-23 01:07:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:43.083089 | orchestrator | 2025-05-23 01:07:43 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:43.086978 | orchestrator | 2025-05-23 01:07:43 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:43.087893 | orchestrator | 2025-05-23 01:07:43 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:43.090121 | orchestrator | 2025-05-23 01:07:43 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:43.090502 | orchestrator | 2025-05-23 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:46.143557 | orchestrator | 2025-05-23 01:07:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:46.145407 | orchestrator | 2025-05-23 01:07:46 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:46.147538 | orchestrator | 2025-05-23 01:07:46 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:46.148834 | orchestrator | 2025-05-23 01:07:46 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:46.149983 | orchestrator | 2025-05-23 01:07:46 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 
2025-05-23 01:07:46.150009 | orchestrator | 2025-05-23 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:49.197305 | orchestrator | 2025-05-23 01:07:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:49.198562 | orchestrator | 2025-05-23 01:07:49 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:49.200255 | orchestrator | 2025-05-23 01:07:49 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:49.201607 | orchestrator | 2025-05-23 01:07:49 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:49.202799 | orchestrator | 2025-05-23 01:07:49 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:49.202903 | orchestrator | 2025-05-23 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:52.253266 | orchestrator | 2025-05-23 01:07:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:52.253372 | orchestrator | 2025-05-23 01:07:52 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:52.256086 | orchestrator | 2025-05-23 01:07:52 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:52.256858 | orchestrator | 2025-05-23 01:07:52 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:52.257886 | orchestrator | 2025-05-23 01:07:52 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:52.257909 | orchestrator | 2025-05-23 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:55.305125 | orchestrator | 2025-05-23 01:07:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:55.305224 | orchestrator | 2025-05-23 01:07:55 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:55.308282 | orchestrator | 2025-05-23 01:07:55 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:55.308387 | orchestrator | 2025-05-23 01:07:55 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:55.308403 | orchestrator | 2025-05-23 01:07:55 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:55.308415 | orchestrator | 2025-05-23 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:07:58.355847 | orchestrator | 2025-05-23 01:07:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:07:58.358178 | orchestrator | 2025-05-23 01:07:58 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:07:58.359439 | orchestrator | 2025-05-23 01:07:58 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:07:58.360902 | orchestrator | 2025-05-23 01:07:58 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:07:58.362373 | orchestrator | 2025-05-23 01:07:58 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:07:58.362408 | orchestrator | 2025-05-23 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:01.418664 | orchestrator | 2025-05-23 01:08:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:01.421078 | orchestrator | 2025-05-23 01:08:01 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 
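(The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the deploy wrapper polling the state of the background deployment tasks until each one leaves the STARTED state. A minimal illustrative sketch of that polling pattern is shown below; the get_task_state() helper is hypothetical and stands in for whatever task-status API the real OSISM client uses, which is not visible in this log.)

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until none of them is still STARTED.

    get_task_state(task_id) is a hypothetical callable returning a state
    string such as "STARTED" or "SUCCESS"; interval mirrors the
    "Wait 1 second(s) until the next check" messages in the log.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```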
2025-05-23 01:08:01.422697 | orchestrator | 2025-05-23 01:08:01 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:01.425651 | orchestrator | 2025-05-23 01:08:01 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:01.427406 | orchestrator | 2025-05-23 01:08:01 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:01.427453 | orchestrator | 2025-05-23 01:08:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:04.482114 | orchestrator | 2025-05-23 01:08:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:04.483568 | orchestrator | 2025-05-23 01:08:04 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:08:04.485439 | orchestrator | 2025-05-23 01:08:04 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:04.487534 | orchestrator | 2025-05-23 01:08:04 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:04.490066 | orchestrator | 2025-05-23 01:08:04 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:04.490116 | orchestrator | 2025-05-23 01:08:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:07.541441 | orchestrator | 2025-05-23 01:08:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:07.542597 | orchestrator | 2025-05-23 01:08:07 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:08:07.544060 | orchestrator | 2025-05-23 01:08:07 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:07.545948 | orchestrator | 2025-05-23 01:08:07 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:07.547829 | orchestrator | 2025-05-23 01:08:07 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:07.547913 | orchestrator | 2025-05-23 01:08:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:10.599510 | orchestrator | 2025-05-23 01:08:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:10.601180 | orchestrator | 2025-05-23 01:08:10 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:08:10.602921 | orchestrator | 2025-05-23 01:08:10 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:10.604040 | orchestrator | 2025-05-23 01:08:10 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:10.605388 | orchestrator | 2025-05-23 01:08:10 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:10.605421 | orchestrator | 2025-05-23 01:08:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:13.647017 | orchestrator | 2025-05-23 01:08:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:13.647949 | orchestrator | 2025-05-23 01:08:13 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state STARTED 2025-05-23 01:08:13.650706 | orchestrator | 2025-05-23 01:08:13 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:13.652623 | orchestrator | 2025-05-23 01:08:13 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:13.655666 | orchestrator | 2025-05-23 01:08:13 | INFO  | Task 
19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:13.655782 | orchestrator | 2025-05-23 01:08:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:16.701418 | orchestrator | 2025-05-23 01:08:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:16.702334 | orchestrator | 2025-05-23 01:08:16 | INFO  | Task ea0872cc-e760-46b7-83b5-261a6e9f2467 is in state SUCCESS 2025-05-23 01:08:16.702637 | orchestrator | 2025-05-23 01:08:16 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:16.704840 | orchestrator | 2025-05-23 01:08:16 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:16.706996 | orchestrator | 2025-05-23 01:08:16 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:16.707761 | orchestrator | 2025-05-23 01:08:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:19.765758 | orchestrator | 2025-05-23 01:08:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:19.765880 | orchestrator | 2025-05-23 01:08:19 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:19.771359 | orchestrator | 2025-05-23 01:08:19 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:19.772510 | orchestrator | 2025-05-23 01:08:19 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:19.772551 | orchestrator | 2025-05-23 01:08:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:22.819227 | orchestrator | 2025-05-23 01:08:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:22.821577 | orchestrator | 2025-05-23 01:08:22 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:22.823839 | orchestrator | 2025-05-23 01:08:22 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:22.825862 | orchestrator | 2025-05-23 01:08:22 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:22.826274 | orchestrator | 2025-05-23 01:08:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:25.879325 | orchestrator | 2025-05-23 01:08:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:25.880282 | orchestrator | 2025-05-23 01:08:25 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:25.883467 | orchestrator | 2025-05-23 01:08:25 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:25.886257 | orchestrator | 2025-05-23 01:08:25 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:25.886368 | orchestrator | 2025-05-23 01:08:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:28.927423 | orchestrator | 2025-05-23 01:08:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:28.928192 | orchestrator | 2025-05-23 01:08:28 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:28.928946 | orchestrator | 2025-05-23 01:08:28 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:28.931116 | orchestrator | 2025-05-23 01:08:28 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:28.931140 | orchestrator | 2025-05-23 01:08:28 | INFO  | Wait 1 
second(s) until the next check 2025-05-23 01:08:31.990282 | orchestrator | 2025-05-23 01:08:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:31.991862 | orchestrator | 2025-05-23 01:08:31 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:31.997158 | orchestrator | 2025-05-23 01:08:31 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:31.997188 | orchestrator | 2025-05-23 01:08:31 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:31.997201 | orchestrator | 2025-05-23 01:08:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:35.047254 | orchestrator | 2025-05-23 01:08:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:35.047630 | orchestrator | 2025-05-23 01:08:35 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:35.048422 | orchestrator | 2025-05-23 01:08:35 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:35.049347 | orchestrator | 2025-05-23 01:08:35 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:35.049455 | orchestrator | 2025-05-23 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:38.102485 | orchestrator | 2025-05-23 01:08:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:38.103612 | orchestrator | 2025-05-23 01:08:38 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:38.103644 | orchestrator | 2025-05-23 01:08:38 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:38.104395 | orchestrator | 2025-05-23 01:08:38 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:38.104418 | orchestrator | 2025-05-23 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:41.160614 | orchestrator | 2025-05-23 01:08:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:41.162061 | orchestrator | 2025-05-23 01:08:41 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:41.164008 | orchestrator | 2025-05-23 01:08:41 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:41.165682 | orchestrator | 2025-05-23 01:08:41 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:41.165939 | orchestrator | 2025-05-23 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:44.215917 | orchestrator | 2025-05-23 01:08:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:44.217279 | orchestrator | 2025-05-23 01:08:44 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:44.219350 | orchestrator | 2025-05-23 01:08:44 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:44.220635 | orchestrator | 2025-05-23 01:08:44 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:44.220756 | orchestrator | 2025-05-23 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:47.269499 | orchestrator | 2025-05-23 01:08:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:47.269916 | orchestrator | 2025-05-23 01:08:47 | INFO  | Task 
af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:47.270826 | orchestrator | 2025-05-23 01:08:47 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:47.271958 | orchestrator | 2025-05-23 01:08:47 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:47.271981 | orchestrator | 2025-05-23 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:50.319968 | orchestrator | 2025-05-23 01:08:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:50.322091 | orchestrator | 2025-05-23 01:08:50 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:50.323630 | orchestrator | 2025-05-23 01:08:50 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:50.325325 | orchestrator | 2025-05-23 01:08:50 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:50.325369 | orchestrator | 2025-05-23 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:53.374783 | orchestrator | 2025-05-23 01:08:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:53.376195 | orchestrator | 2025-05-23 01:08:53 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:53.379115 | orchestrator | 2025-05-23 01:08:53 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:53.381269 | orchestrator | 2025-05-23 01:08:53 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:53.381312 | orchestrator | 2025-05-23 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:56.432283 | orchestrator | 2025-05-23 01:08:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:56.433674 | orchestrator | 2025-05-23 01:08:56 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:56.436307 | orchestrator | 2025-05-23 01:08:56 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:56.438800 | orchestrator | 2025-05-23 01:08:56 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:56.438859 | orchestrator | 2025-05-23 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:08:59.491519 | orchestrator | 2025-05-23 01:08:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:08:59.492457 | orchestrator | 2025-05-23 01:08:59 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:08:59.493666 | orchestrator | 2025-05-23 01:08:59 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:08:59.495411 | orchestrator | 2025-05-23 01:08:59 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:08:59.495455 | orchestrator | 2025-05-23 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:02.545809 | orchestrator | 2025-05-23 01:09:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:02.548821 | orchestrator | 2025-05-23 01:09:02 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:02.550471 | orchestrator | 2025-05-23 01:09:02 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:02.552090 | orchestrator | 2025-05-23 01:09:02 | INFO  | Task 
19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:09:02.552115 | orchestrator | 2025-05-23 01:09:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:05.608120 | orchestrator | 2025-05-23 01:09:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:05.609467 | orchestrator | 2025-05-23 01:09:05 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:05.612017 | orchestrator | 2025-05-23 01:09:05 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:05.613538 | orchestrator | 2025-05-23 01:09:05 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:09:05.613756 | orchestrator | 2025-05-23 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:08.672278 | orchestrator | 2025-05-23 01:09:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:08.676778 | orchestrator | 2025-05-23 01:09:08 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:08.676888 | orchestrator | 2025-05-23 01:09:08 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:08.678606 | orchestrator | 2025-05-23 01:09:08 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state STARTED 2025-05-23 01:09:08.678781 | orchestrator | 2025-05-23 01:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:11.723955 | orchestrator | 2025-05-23 01:09:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:11.725908 | orchestrator | 2025-05-23 01:09:11 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:11.727443 | orchestrator | 2025-05-23 01:09:11 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:11.729094 | orchestrator | 2025-05-23 01:09:11 | INFO  | Task 19898d2f-a9f5-4e6a-a6d3-a5c54ef86506 is in state SUCCESS 2025-05-23 01:09:11.729121 | orchestrator | 2025-05-23 01:09:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:14.784833 | orchestrator | 2025-05-23 01:09:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:14.788070 | orchestrator | 2025-05-23 01:09:14 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:14.791148 | orchestrator | 2025-05-23 01:09:14 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:14.791194 | orchestrator | 2025-05-23 01:09:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:17.843819 | orchestrator | 2025-05-23 01:09:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:17.846096 | orchestrator | 2025-05-23 01:09:17 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:17.846429 | orchestrator | 2025-05-23 01:09:17 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:17.846621 | orchestrator | 2025-05-23 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:20.884935 | orchestrator | 2025-05-23 01:09:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:20.885467 | orchestrator | 2025-05-23 01:09:20 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state STARTED 2025-05-23 01:09:20.886778 | orchestrator | 2025-05-23 01:09:20 | INFO  | Task 
349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:09:20.886855 | orchestrator | 2025-05-23 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:09:23.930292 | orchestrator | 2025-05-23 01:09:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:09:23.932772 | orchestrator | 2025-05-23 01:09:23 | INFO  | Task af18efb5-560f-4cd4-a303-0388cd55d1e4 is in state SUCCESS 2025-05-23 01:09:23.934151 | orchestrator | 2025-05-23 01:09:23.934189 | orchestrator | 2025-05-23 01:09:23.934263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:09:23.934276 | orchestrator | 2025-05-23 01:09:23.934288 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:09:23.934300 | orchestrator | Friday 23 May 2025 01:07:22 +0000 (0:00:00.325) 0:00:00.325 ************ 2025-05-23 01:09:23.934371 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.934386 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:09:23.934397 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:09:23.934427 | orchestrator | 2025-05-23 01:09:23.934439 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:09:23.934462 | orchestrator | Friday 23 May 2025 01:07:23 +0000 (0:00:00.370) 0:00:00.696 ************ 2025-05-23 01:09:23.934473 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-23 01:09:23.934485 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-23 01:09:23.934496 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-23 01:09:23.934561 | orchestrator | 2025-05-23 01:09:23.934572 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-23 01:09:23.934583 | orchestrator | 2025-05-23 01:09:23.934594 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-23 01:09:23.934605 | orchestrator | Friday 23 May 2025 01:07:23 +0000 (0:00:00.332) 0:00:01.028 ************ 2025-05-23 01:09:23.934615 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:09:23.934627 | orchestrator | 2025-05-23 01:09:23.934638 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-23 01:09:23.934650 | orchestrator | Friday 23 May 2025 01:07:24 +0000 (0:00:00.837) 0:00:01.866 ************ 2025-05-23 01:09:23.934662 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-23 01:09:23.934702 | orchestrator | 2025-05-23 01:09:23.934713 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-23 01:09:23.934724 | orchestrator | Friday 23 May 2025 01:07:27 +0000 (0:00:03.359) 0:00:05.226 ************ 2025-05-23 01:09:23.934734 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-23 01:09:23.934760 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-23 01:09:23.934773 | orchestrator | 2025-05-23 01:09:23.934786 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-23 01:09:23.934798 | orchestrator | Friday 23 May 2025 01:07:34 +0000 (0:00:06.471) 0:00:11.697 ************ 
2025-05-23 01:09:23.934811 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:09:23.934822 | orchestrator | 2025-05-23 01:09:23.934835 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-23 01:09:23.934847 | orchestrator | Friday 23 May 2025 01:07:37 +0000 (0:00:03.347) 0:00:15.044 ************ 2025-05-23 01:09:23.934859 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:09:23.934872 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-23 01:09:23.934885 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-23 01:09:23.934898 | orchestrator | 2025-05-23 01:09:23.934911 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-23 01:09:23.934943 | orchestrator | Friday 23 May 2025 01:07:45 +0000 (0:00:07.918) 0:00:22.963 ************ 2025-05-23 01:09:23.934956 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:09:23.934968 | orchestrator | 2025-05-23 01:09:23.934980 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-23 01:09:23.934992 | orchestrator | Friday 23 May 2025 01:07:48 +0000 (0:00:03.117) 0:00:26.081 ************ 2025-05-23 01:09:23.935005 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-23 01:09:23.935018 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-23 01:09:23.935030 | orchestrator | 2025-05-23 01:09:23.935042 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-23 01:09:23.935054 | orchestrator | Friday 23 May 2025 01:07:56 +0000 (0:00:07.773) 0:00:33.854 ************ 2025-05-23 01:09:23.935066 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-23 01:09:23.935088 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-23 01:09:23.935101 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-23 01:09:23.935114 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-23 01:09:23.935127 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-23 01:09:23.935140 | orchestrator | 2025-05-23 01:09:23.935151 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-23 01:09:23.935162 | orchestrator | Friday 23 May 2025 01:08:11 +0000 (0:00:15.235) 0:00:49.090 ************ 2025-05-23 01:09:23.935173 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:09:23.935183 | orchestrator | 2025-05-23 01:09:23.935194 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-23 01:09:23.935204 | orchestrator | Friday 23 May 2025 01:08:12 +0000 (0:00:00.998) 0:00:50.089 ************ 2025-05-23 01:09:23.935232 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-23 01:09:23.935248 | orchestrator | 2025-05-23 01:09:23.935259 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:09:23.935271 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935283 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935295 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935306 | orchestrator | 2025-05-23 01:09:23.935316 | orchestrator | 2025-05-23 01:09:23.935327 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:09:23.935338 | orchestrator | Friday 23 May 2025 01:08:16 +0000 (0:00:03.366) 0:00:53.456 ************ 2025-05-23 01:09:23.935348 | orchestrator | =============================================================================== 2025-05-23 01:09:23.935359 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.24s 2025-05-23 01:09:23.935370 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.92s 2025-05-23 01:09:23.935381 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2025-05-23 01:09:23.935392 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.47s 2025-05-23 01:09:23.935402 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.37s 2025-05-23 01:09:23.935413 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.36s 2025-05-23 01:09:23.935424 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.35s 2025-05-23 01:09:23.935434 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.12s 2025-05-23 01:09:23.935450 | orchestrator | octavia : include_tasks ------------------------------------------------- 1.00s 2025-05-23 01:09:23.935461 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.84s 2025-05-23 01:09:23.935472 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-05-23 01:09:23.935483 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2025-05-23 01:09:23.935493 | orchestrator | 2025-05-23 01:09:23.935504 | orchestrator | 2025-05-23 01:09:23.935515 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:09:23.935532 | orchestrator | 2025-05-23 01:09:23.935543 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:09:23.935553 | orchestrator | Friday 23 May 2025 01:06:46 +0000 (0:00:00.213) 0:00:00.213 ************ 2025-05-23 01:09:23.935564 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.935575 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:09:23.935585 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:09:23.935596 | orchestrator | 2025-05-23 01:09:23.935607 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-23 01:09:23.935618 | orchestrator | Friday 23 May 2025 01:06:47 +0000 (0:00:00.380) 0:00:00.594 ************ 2025-05-23 01:09:23.935629 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-23 01:09:23.935640 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-23 01:09:23.935650 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-23 01:09:23.935661 | orchestrator | 2025-05-23 01:09:23.935703 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-23 01:09:23.935724 | orchestrator | 2025-05-23 01:09:23.935742 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-23 01:09:23.935755 | orchestrator | Friday 23 May 2025 01:06:47 +0000 (0:00:00.583) 0:00:01.178 ************ 2025-05-23 01:09:23.935766 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.935777 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:09:23.935787 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:09:23.935798 | orchestrator | 2025-05-23 01:09:23.935809 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-23 01:09:23.935820 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935831 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935841 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-23 01:09:23.935852 | orchestrator | 2025-05-23 01:09:23.935862 | orchestrator | 2025-05-23 01:09:23.935873 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-23 01:09:23.935884 | orchestrator | Friday 23 May 2025 01:09:08 +0000 (0:02:20.896) 0:02:22.074 ************ 2025-05-23 01:09:23.935894 | orchestrator | =============================================================================== 2025-05-23 01:09:23.935905 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 140.90s 2025-05-23 01:09:23.935915 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-05-23 01:09:23.935926 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-05-23 01:09:23.935937 | orchestrator | 2025-05-23 01:09:23.935947 | orchestrator | 2025-05-23 01:09:23.935958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-23 01:09:23.935968 | orchestrator | 2025-05-23 01:09:23.935979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-23 01:09:23.935997 | orchestrator | Friday 23 May 2025 01:07:33 +0000 (0:00:00.303) 0:00:00.303 ************ 2025-05-23 01:09:23.936008 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.936019 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:09:23.936030 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:09:23.936040 | orchestrator | 2025-05-23 01:09:23.936051 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-23 01:09:23.936062 | orchestrator | Friday 23 May 2025 01:07:34 +0000 (0:00:00.427) 0:00:00.731 ************ 2025-05-23 01:09:23.936073 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-05-23 01:09:23.936083 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-23 01:09:23.936094 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-23 01:09:23.936105 | orchestrator | 2025-05-23 01:09:23.936123 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-23 01:09:23.936133 | orchestrator | 2025-05-23 01:09:23.936144 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-23 01:09:23.936155 | orchestrator | Friday 23 May 2025 01:07:34 +0000 (0:00:00.310) 0:00:01.041 ************ 2025-05-23 01:09:23.936165 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:09:23.936176 | orchestrator | 2025-05-23 01:09:23.936187 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-23 01:09:23.936197 | orchestrator | Friday 23 May 2025 01:07:35 +0000 (0:00:00.745) 0:00:01.787 ************ 2025-05-23 01:09:23.936215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936252 | orchestrator | 2025-05-23 01:09:23.936263 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-23 01:09:23.936274 | orchestrator | Friday 23 May 2025 01:07:36 +0000 (0:00:00.819) 0:00:02.606 ************ 2025-05-23 
01:09:23.936284 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-23 01:09:23.936295 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-23 01:09:23.936305 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:09:23.936316 | orchestrator | 2025-05-23 01:09:23.936327 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-23 01:09:23.936337 | orchestrator | Friday 23 May 2025 01:07:36 +0000 (0:00:00.523) 0:00:03.129 ************ 2025-05-23 01:09:23.936348 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:09:23.936359 | orchestrator | 2025-05-23 01:09:23.936370 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-23 01:09:23.936380 | orchestrator | Friday 23 May 2025 01:07:37 +0000 (0:00:00.621) 0:00:03.751 ************ 2025-05-23 01:09:23.936399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936446 | orchestrator | 2025-05-23 01:09:23.936457 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-23 01:09:23.936468 | orchestrator | Friday 23 May 2025 01:07:38 +0000 (0:00:01.394) 0:00:05.145 ************ 2025-05-23 01:09:23.936479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936490 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.936502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936513 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.936531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936550 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.936561 | orchestrator | 2025-05-23 01:09:23.936571 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-23 01:09:23.936582 | orchestrator | Friday 23 May 2025 01:07:39 +0000 (0:00:00.693) 0:00:05.838 ************ 2025-05-23 01:09:23.936593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936604 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.936619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936631 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.936642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-23 01:09:23.936653 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.936729 | orchestrator | 2025-05-23 01:09:23.936744 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-23 01:09:23.936755 | orchestrator | Friday 23 May 2025 01:07:40 +0000 (0:00:00.692) 0:00:06.531 ************ 2025-05-23 01:09:23.936767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936818 | orchestrator | 2025-05-23 01:09:23.936829 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-23 01:09:23.936840 | orchestrator | Friday 23 May 2025 01:07:41 +0000 (0:00:01.361) 0:00:07.893 ************ 2025-05-23 01:09:23.936851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.936891 | orchestrator | 2025-05-23 01:09:23.936902 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-23 01:09:23.936912 | orchestrator | Friday 23 May 2025 01:07:43 +0000 (0:00:01.775) 0:00:09.668 ************ 2025-05-23 01:09:23.936930 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.936941 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.936952 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.936963 | orchestrator | 2025-05-23 01:09:23.936974 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-23 01:09:23.936984 | orchestrator | Friday 23 May 2025 01:07:43 +0000 (0:00:00.288) 0:00:09.957 ************ 2025-05-23 01:09:23.936995 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-23 01:09:23.937006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-23 01:09:23.937016 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-23 01:09:23.937027 | orchestrator | 2025-05-23 01:09:23.937038 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-23 01:09:23.937048 | orchestrator | Friday 23 May 2025 01:07:45 +0000 (0:00:01.438) 0:00:11.395 ************ 2025-05-23 01:09:23.937059 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-23 01:09:23.937070 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-23 01:09:23.937081 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-23 01:09:23.937092 | orchestrator | 2025-05-23 01:09:23.937109 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-23 01:09:23.937120 | orchestrator | Friday 23 May 2025 01:07:46 +0000 (0:00:01.422) 0:00:12.818 ************ 2025-05-23 01:09:23.937131 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:09:23.937142 | orchestrator | 2025-05-23 01:09:23.937151 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-23 01:09:23.937161 | orchestrator | Friday 23 May 2025 01:07:46 +0000 (0:00:00.452) 0:00:13.271 ************ 2025-05-23 01:09:23.937170 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-23 01:09:23.937180 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-23 01:09:23.937189 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.937199 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:09:23.937209 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:09:23.937218 | orchestrator | 2025-05-23 01:09:23.937228 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-23 01:09:23.937238 | orchestrator | Friday 23 May 2025 01:07:47 +0000 (0:00:00.872) 0:00:14.143 ************ 2025-05-23 01:09:23.937247 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.937257 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.937267 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.937276 | orchestrator | 2025-05-23 01:09:23.937286 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-23 01:09:23.937295 | orchestrator | Friday 23 May 2025 01:07:48 +0000 (0:00:00.479) 0:00:14.623 ************ 2025-05-23 01:09:23.937315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096100, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.000489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096100, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.000489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096100, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.000489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096080, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096080, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096080, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937389 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096055, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096055, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096055, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096089, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9224877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096089, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9224877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096089, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9224877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096034, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096034, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096034, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096066, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096066, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096066, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9194877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096087, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096087, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096087, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096030, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096030, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096030, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095976, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9004874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095976, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9004874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095976, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9004874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096038, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096038, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096038, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9154875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095984, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9024873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.937981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095984, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9024873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095984, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9024873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1096084, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1096084, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1096084, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959253.9214876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1096041, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959253.9164877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1096041, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1747959253.9164877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1096041, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959253.9164877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096094, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9234877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096094, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9234877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096094, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9234877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096022, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096022, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096022, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9144876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096071, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9204876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096071, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9204876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096071, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9204876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095977, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9014874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095977, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9014874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095977, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9014874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095987, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9034874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095987, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9034874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095987, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9034874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096047, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9174876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096047, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9174876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096047, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959253.9174876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096457, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0334895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1096457, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0334895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096457, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0334895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096446, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096446, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096446, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096524, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747959254.0414896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096524, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0414896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096524, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0414896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096380, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.001489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096380, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.001489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096380, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.001489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096537, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0434895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096537, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0434895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096537, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0434895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096509, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0354893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.938998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096509, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0354893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096509, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0354893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096519, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0364895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096519, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0364895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096519, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0364895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096386, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-23 01:09:23.939127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096386, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096386, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096450, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0204892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096450, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0204892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096450, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0204892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939242 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096544, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0444896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096544, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0444896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096544, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0444896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1096522, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0374894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1096522, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0374894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1096522, 'dev': 173, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747959254.0374894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096388, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.003489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096388, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.003489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096388, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.003489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096387, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096387, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096387, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0024889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096394, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0054889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096394, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0054889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096394, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0054889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096398, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747959254.0174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096398, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096398, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096551, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0494897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096551, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0494897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096551, 'dev': 173, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747959254.0494897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-23 01:09:23.939739 | orchestrator | 2025-05-23 01:09:23.939757 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-23 01:09:23.939775 | orchestrator | Friday 23 May 2025 01:08:21 +0000 (0:00:33.092) 0:00:47.715 ************ 2025-05-23 01:09:23.939802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.939820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.939844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-23 01:09:23.939862 | orchestrator | 2025-05-23 01:09:23.939879 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-23 01:09:23.939896 | orchestrator | Friday 23 May 2025 01:08:22 +0000 (0:00:01.047) 0:00:48.762 ************ 2025-05-23 01:09:23.939912 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:09:23.939929 | orchestrator | 2025-05-23 01:09:23.939945 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-23 01:09:23.939962 | orchestrator | Friday 23 May 2025 01:08:25 +0000 (0:00:02.663) 0:00:51.426 ************ 2025-05-23 01:09:23.939978 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:09:23.939994 | orchestrator | 2025-05-23 01:09:23.940010 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-23 01:09:23.940026 | orchestrator | Friday 23 May 2025 01:08:27 +0000 (0:00:02.250) 0:00:53.677 ************ 2025-05-23 01:09:23.940043 | 
orchestrator | 2025-05-23 01:09:23.940068 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-23 01:09:23.940086 | orchestrator | Friday 23 May 2025 01:08:27 +0000 (0:00:00.069) 0:00:53.746 ************ 2025-05-23 01:09:23.940101 | orchestrator | 2025-05-23 01:09:23.940118 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-23 01:09:23.940134 | orchestrator | Friday 23 May 2025 01:08:27 +0000 (0:00:00.056) 0:00:53.802 ************ 2025-05-23 01:09:23.940150 | orchestrator | 2025-05-23 01:09:23.940166 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-23 01:09:23.940182 | orchestrator | Friday 23 May 2025 01:08:27 +0000 (0:00:00.210) 0:00:54.012 ************ 2025-05-23 01:09:23.940197 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.940214 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.940229 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:09:23.940246 | orchestrator | 2025-05-23 01:09:23.940261 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-23 01:09:23.940278 | orchestrator | Friday 23 May 2025 01:08:34 +0000 (0:00:07.191) 0:01:01.204 ************ 2025-05-23 01:09:23.940294 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.940309 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.940326 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-23 01:09:23.940342 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-23 01:09:23.940357 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.940372 | orchestrator | 2025-05-23 01:09:23.940389 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-23 01:09:23.940405 | orchestrator | Friday 23 May 2025 01:09:01 +0000 (0:00:26.611) 0:01:27.816 ************ 2025-05-23 01:09:23.940420 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.940436 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:09:23.940452 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:09:23.940467 | orchestrator | 2025-05-23 01:09:23.940483 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-23 01:09:23.940498 | orchestrator | Friday 23 May 2025 01:09:16 +0000 (0:00:14.629) 0:01:42.445 ************ 2025-05-23 01:09:23.940514 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:09:23.940530 | orchestrator | 2025-05-23 01:09:23.940545 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-23 01:09:23.940560 | orchestrator | Friday 23 May 2025 01:09:18 +0000 (0:00:02.363) 0:01:44.809 ************ 2025-05-23 01:09:23.940577 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:09:23.940605 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:09:23.940622 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:09:23.940638 | orchestrator | 2025-05-23 01:09:23.940654 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-23 01:09:23.940713 | orchestrator | Friday 23 May 2025 01:09:19 +0000 (0:00:00.618) 0:01:45.427 ************ 2025-05-23 01:09:23.940731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 
'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-23 01:09:23.940750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-23 01:09:23.940769 | orchestrator |
2025-05-23 01:09:23.940786 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-23 01:09:23.940802 | orchestrator | Friday 23 May 2025 01:09:21 +0000 (0:00:02.344) 0:01:47.772 ************
2025-05-23 01:09:23.940819 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:09:23.940850 | orchestrator |
2025-05-23 01:09:23.940868 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 01:09:23.940886 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-23 01:09:23.940904 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-23 01:09:23.940929 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-23 01:09:23.940947 | orchestrator |
2025-05-23 01:09:23.940965 | orchestrator |
2025-05-23 01:09:23.940983 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 01:09:23.941001 | orchestrator | Friday 23 May 2025 01:09:21 +0000 (0:00:00.389) 0:01:48.161 ************
2025-05-23 01:09:23.941019 | orchestrator | ===============================================================================
2025-05-23 01:09:23.941037 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.09s
2025-05-23 01:09:23.941053 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.61s
2025-05-23 01:09:23.941070 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 14.63s
2025-05-23 01:09:23.941087 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.19s
2025-05-23 01:09:23.941104 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.66s
2025-05-23 01:09:23.941119 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.36s
2025-05-23 01:09:23.941135 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.34s
2025-05-23 01:09:23.941149 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2025-05-23 01:09:23.941164 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.78s
2025-05-23 01:09:23.941179 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.44s
2025-05-23 01:09:23.941196 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.42s
2025-05-23 01:09:23.941212 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s
2025-05-23 01:09:23.941229 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2025-05-23 01:09:23.941245 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s
2025-05-23 01:09:23.941261 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.87s
2025-05-23 01:09:23.941278 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2025-05-23 01:09:23.941293 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s
2025-05-23 01:09:23.941310 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.69s
2025-05-23 01:09:23.941329 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.69s
2025-05-23 01:09:23.941347 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s
2025-05-23 01:09:23.941364 | orchestrator | 2025-05-23 01:09:23 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:23.941380 | orchestrator | 2025-05-23 01:09:23 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:26.978292 | orchestrator | 2025-05-23 01:09:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:26.978394 | orchestrator | 2025-05-23 01:09:26 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:26.978409 | orchestrator | 2025-05-23 01:09:26 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:30.040909 | orchestrator | 2025-05-23 01:09:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:30.043805 | orchestrator | 2025-05-23 01:09:30 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:30.044384 | orchestrator | 2025-05-23 01:09:30 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:33.103745 | orchestrator | 2025-05-23 01:09:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:33.103845 | orchestrator | 2025-05-23 01:09:33 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:33.103860 | orchestrator | 2025-05-23 01:09:33 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:36.148706 | orchestrator | 2025-05-23 01:09:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:36.149240 | orchestrator | 2025-05-23 01:09:36 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:36.149276 | orchestrator | 2025-05-23 01:09:36 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:39.198425 | orchestrator | 2025-05-23 01:09:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:39.207380 | orchestrator | 2025-05-23 01:09:39 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:39.207420 | orchestrator | 2025-05-23 01:09:39 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:42.263368 | orchestrator | 2025-05-23 01:09:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:42.264094 | orchestrator | 2025-05-23 01:09:42 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:42.264149 | orchestrator | 2025-05-23 01:09:42 | INFO  | Wait 1 second(s) until the next check
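[Editor's note: the repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines above and below are the deploy wrapper polling the queued tasks until they finish. As an illustration only, a minimal Python sketch of such a poll loop follows; get_task_state() is a hypothetical placeholder for whatever call returns a task's state, and the interval/timeout values are assumptions, not the values used by this job.]

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    # Poll every task until it leaves the STARTED state or the timeout expires.
    # get_task_state is a caller-supplied callable (hypothetical placeholder)
    # that maps a task id to a state string such as "STARTED" or "SUCCESS".
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
    return not pending  # True only if every task finished before the timeout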
2025-05-23 01:09:45.317083 | orchestrator | 2025-05-23 01:09:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:45.318942 | orchestrator | 2025-05-23 01:09:45 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:45.318991 | orchestrator | 2025-05-23 01:09:45 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:48.388718 | orchestrator | 2025-05-23 01:09:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:48.390918 | orchestrator | 2025-05-23 01:09:48 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:48.390966 | orchestrator | 2025-05-23 01:09:48 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:51.449253 | orchestrator | 2025-05-23 01:09:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:51.450415 | orchestrator | 2025-05-23 01:09:51 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:51.450513 | orchestrator | 2025-05-23 01:09:51 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:54.501606 | orchestrator | 2025-05-23 01:09:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:09:54.502307 | orchestrator | 2025-05-23 01:09:54 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:09:54.502347 | orchestrator | 2025-05-23 01:09:54 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:09:57.567587 | orchestrator | 2025-05-23 01:09:57 |
orchestrator | 2025-05-23 01:10:12 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:12.814891 | orchestrator | 2025-05-23 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:15.859350 | orchestrator | 2025-05-23 01:10:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:15.861922 | orchestrator | 2025-05-23 01:10:15 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:15.861968 | orchestrator | 2025-05-23 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:18.925199 | orchestrator | 2025-05-23 01:10:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:18.925323 | orchestrator | 2025-05-23 01:10:18 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:18.925741 | orchestrator | 2025-05-23 01:10:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:21.976853 | orchestrator | 2025-05-23 01:10:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:21.977111 | orchestrator | 2025-05-23 01:10:21 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:21.977131 | orchestrator | 2025-05-23 01:10:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:25.046813 | orchestrator | 2025-05-23 01:10:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:25.046921 | orchestrator | 2025-05-23 01:10:25 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:25.046936 | orchestrator | 2025-05-23 01:10:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:28.091417 | orchestrator | 2025-05-23 01:10:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:28.091520 | orchestrator | 2025-05-23 01:10:28 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:28.091565 | orchestrator | 2025-05-23 01:10:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:31.138264 | orchestrator | 2025-05-23 01:10:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:31.139199 | orchestrator | 2025-05-23 01:10:31 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:31.139459 | orchestrator | 2025-05-23 01:10:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:34.204250 | orchestrator | 2025-05-23 01:10:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:34.205721 | orchestrator | 2025-05-23 01:10:34 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:34.205754 | orchestrator | 2025-05-23 01:10:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:37.253390 | orchestrator | 2025-05-23 01:10:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:37.255152 | orchestrator | 2025-05-23 01:10:37 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:37.255208 | orchestrator | 2025-05-23 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:40.293726 | orchestrator | 2025-05-23 01:10:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:40.298815 | orchestrator | 2025-05-23 01:10:40 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state 
STARTED 2025-05-23 01:10:40.298860 | orchestrator | 2025-05-23 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:43.336164 | orchestrator | 2025-05-23 01:10:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:43.336723 | orchestrator | 2025-05-23 01:10:43 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:43.337924 | orchestrator | 2025-05-23 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:46.394898 | orchestrator | 2025-05-23 01:10:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:46.396899 | orchestrator | 2025-05-23 01:10:46 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:46.397120 | orchestrator | 2025-05-23 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:49.454874 | orchestrator | 2025-05-23 01:10:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:49.456573 | orchestrator | 2025-05-23 01:10:49 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:49.456985 | orchestrator | 2025-05-23 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:52.507357 | orchestrator | 2025-05-23 01:10:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:52.507462 | orchestrator | 2025-05-23 01:10:52 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:52.507476 | orchestrator | 2025-05-23 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:55.562625 | orchestrator | 2025-05-23 01:10:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:55.563770 | orchestrator | 2025-05-23 01:10:55 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:55.563804 | orchestrator | 2025-05-23 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:10:58.633179 | orchestrator | 2025-05-23 01:10:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:10:58.633319 | orchestrator | 2025-05-23 01:10:58 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:10:58.633897 | orchestrator | 2025-05-23 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:01.675883 | orchestrator | 2025-05-23 01:11:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:01.677031 | orchestrator | 2025-05-23 01:11:01 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:01.677144 | orchestrator | 2025-05-23 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:04.728786 | orchestrator | 2025-05-23 01:11:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:04.731218 | orchestrator | 2025-05-23 01:11:04 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:04.731259 | orchestrator | 2025-05-23 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:07.778493 | orchestrator | 2025-05-23 01:11:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:07.779447 | orchestrator | 2025-05-23 01:11:07 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:07.779479 | orchestrator | 2025-05-23 01:11:07 | INFO  | Wait 1 second(s) 
until the next check 2025-05-23 01:11:10.839384 | orchestrator | 2025-05-23 01:11:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:10.840396 | orchestrator | 2025-05-23 01:11:10 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:10.840744 | orchestrator | 2025-05-23 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:13.896927 | orchestrator | 2025-05-23 01:11:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:13.898398 | orchestrator | 2025-05-23 01:11:13 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:13.898435 | orchestrator | 2025-05-23 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:16.949938 | orchestrator | 2025-05-23 01:11:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:16.950224 | orchestrator | 2025-05-23 01:11:16 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:16.950248 | orchestrator | 2025-05-23 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:20.012210 | orchestrator | 2025-05-23 01:11:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:20.013200 | orchestrator | 2025-05-23 01:11:20 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:20.013236 | orchestrator | 2025-05-23 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:23.065211 | orchestrator | 2025-05-23 01:11:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:23.068315 | orchestrator | 2025-05-23 01:11:23 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:23.068363 | orchestrator | 2025-05-23 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:26.111687 | orchestrator | 2025-05-23 01:11:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:26.111819 | orchestrator | 2025-05-23 01:11:26 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:26.111846 | orchestrator | 2025-05-23 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:29.161757 | orchestrator | 2025-05-23 01:11:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:29.163269 | orchestrator | 2025-05-23 01:11:29 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:29.163308 | orchestrator | 2025-05-23 01:11:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:32.215741 | orchestrator | 2025-05-23 01:11:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:32.216521 | orchestrator | 2025-05-23 01:11:32 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:32.216567 | orchestrator | 2025-05-23 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:35.266452 | orchestrator | 2025-05-23 01:11:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:35.267169 | orchestrator | 2025-05-23 01:11:35 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:35.267205 | orchestrator | 2025-05-23 01:11:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:38.307861 | orchestrator | 2025-05-23 01:11:38 | INFO  | 
Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:38.309612 | orchestrator | 2025-05-23 01:11:38 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:38.309693 | orchestrator | 2025-05-23 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:41.363710 | orchestrator | 2025-05-23 01:11:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:41.367142 | orchestrator | 2025-05-23 01:11:41 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:41.367178 | orchestrator | 2025-05-23 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:44.409711 | orchestrator | 2025-05-23 01:11:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:44.410920 | orchestrator | 2025-05-23 01:11:44 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:44.411216 | orchestrator | 2025-05-23 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:47.485287 | orchestrator | 2025-05-23 01:11:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:47.486932 | orchestrator | 2025-05-23 01:11:47 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:47.487245 | orchestrator | 2025-05-23 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:50.549238 | orchestrator | 2025-05-23 01:11:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:50.550281 | orchestrator | 2025-05-23 01:11:50 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:50.550318 | orchestrator | 2025-05-23 01:11:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:53.604454 | orchestrator | 2025-05-23 01:11:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:53.604554 | orchestrator | 2025-05-23 01:11:53 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:53.604700 | orchestrator | 2025-05-23 01:11:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:56.653981 | orchestrator | 2025-05-23 01:11:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:56.655374 | orchestrator | 2025-05-23 01:11:56 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:56.655764 | orchestrator | 2025-05-23 01:11:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:11:59.703193 | orchestrator | 2025-05-23 01:11:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:11:59.705245 | orchestrator | 2025-05-23 01:11:59 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:11:59.705280 | orchestrator | 2025-05-23 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:02.752707 | orchestrator | 2025-05-23 01:12:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:02.754342 | orchestrator | 2025-05-23 01:12:02 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:02.754374 | orchestrator | 2025-05-23 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:05.800841 | orchestrator | 2025-05-23 01:12:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:05.801909 | 
orchestrator | 2025-05-23 01:12:05 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:05.801975 | orchestrator | 2025-05-23 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:08.859970 | orchestrator | 2025-05-23 01:12:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:08.860163 | orchestrator | 2025-05-23 01:12:08 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:08.860183 | orchestrator | 2025-05-23 01:12:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:11.913952 | orchestrator | 2025-05-23 01:12:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:11.915332 | orchestrator | 2025-05-23 01:12:11 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:11.915383 | orchestrator | 2025-05-23 01:12:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:14.973124 | orchestrator | 2025-05-23 01:12:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:14.973840 | orchestrator | 2025-05-23 01:12:14 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:14.973875 | orchestrator | 2025-05-23 01:12:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:18.033021 | orchestrator | 2025-05-23 01:12:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:18.034222 | orchestrator | 2025-05-23 01:12:18 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:18.034265 | orchestrator | 2025-05-23 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:21.077504 | orchestrator | 2025-05-23 01:12:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:21.077604 | orchestrator | 2025-05-23 01:12:21 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:21.077688 | orchestrator | 2025-05-23 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:24.119154 | orchestrator | 2025-05-23 01:12:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:24.120677 | orchestrator | 2025-05-23 01:12:24 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:24.120724 | orchestrator | 2025-05-23 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:27.172469 | orchestrator | 2025-05-23 01:12:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:27.172848 | orchestrator | 2025-05-23 01:12:27 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:27.172879 | orchestrator | 2025-05-23 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:30.217970 | orchestrator | 2025-05-23 01:12:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:30.220125 | orchestrator | 2025-05-23 01:12:30 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED 2025-05-23 01:12:30.220242 | orchestrator | 2025-05-23 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:12:33.268710 | orchestrator | 2025-05-23 01:12:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:12:33.270777 | orchestrator | 2025-05-23 01:12:33 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state 
STARTED 2025-05-23 01:12:33.270816 | orchestrator | 2025-05-23 01:12:33 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:12:36.319648 | orchestrator | 2025-05-23 01:12:36 | INFO  | Task fdd95186-6c48-4d93-8b94-0ad09ce49cad is in state STARTED
2025-05-23 01:12:36.325721 | orchestrator | 2025-05-23 01:12:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:12:36.327296 | orchestrator | 2025-05-23 01:12:36 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:12:36.327342 | orchestrator | 2025-05-23 01:12:36 | INFO  | Wait 1 second(s) until the next check
(the same three STARTED checks and the 1-second wait repeated at 01:12:39, 01:12:42 and 01:12:45)
2025-05-23 01:12:48.541028 | orchestrator | 2025-05-23 01:12:48 | INFO  | Task fdd95186-6c48-4d93-8b94-0ad09ce49cad is in state SUCCESS
2025-05-23 01:12:48.542419 | orchestrator | 2025-05-23 01:12:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:12:48.543000 | orchestrator | 2025-05-23 01:12:48 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state STARTED
2025-05-23 01:12:48.543027 | orchestrator | 2025-05-23 01:12:48 | INFO  | Wait 1 second(s) until the next check
(the two remaining tasks, eee81a36-e0fa-4360-a4d6-6ece23412765 and 349f58af-552e-4c2d-84aa-6532d153bd1c, were reported in state STARTED, each followed by "Wait 1 second(s) until the next check", on every check from 01:12:51 through 01:13:25)
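The deploy wrapper is doing nothing more here than polling the state of the Celery tasks it queued until each one reaches a terminal state. A minimal shell sketch of that wait loop, assuming a hypothetical task_state helper that prints the current state for a task UUID (the real osism client call may differ):

    # Poll one task until it leaves PENDING/STARTED (sketch, not the actual osism code).
    task_id=eee81a36-e0fa-4360-a4d6-6ece23412765
    while true; do
        state=$(task_state "$task_id")      # hypothetical helper returning PENDING/STARTED/SUCCESS/FAILURE
        echo "Task $task_id is in state $state"
        case "$state" in
            SUCCESS|FAILURE) break ;;       # terminal states end the wait loop
        esac
        echo "Wait 1 second(s) until the next check"
        sleep 1
    done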
2025-05-23 01:13:28.180999 | orchestrator | 2025-05-23 01:13:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:13:28.185890 | orchestrator | 2025-05-23 01:13:28 | INFO  | Task 349f58af-552e-4c2d-84aa-6532d153bd1c is in state SUCCESS
2025-05-23 01:13:28.188418 | orchestrator |
2025-05-23 01:13:28.188478 | orchestrator | None
2025-05-23 01:13:28.188500 | orchestrator |
2025-05-23 01:13:28.188519 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-23 01:13:28.188538 | orchestrator |
2025-05-23 01:13:28.188556 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-23 01:13:28.188575 | orchestrator | Friday 23 May 2025 01:05:07 +0000 (0:00:00.332) 0:00:00.332 ************
2025-05-23 01:13:28.188748 | orchestrator | changed: [testbed-manager]
2025-05-23 01:13:28.188779 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:13:28.188799 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:13:28.189015 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:13:28.189033 | orchestrator | changed: [testbed-node-3]
2025-05-23 01:13:28.189053 | orchestrator | changed: [testbed-node-4]
2025-05-23 01:13:28.189074 | orchestrator | changed: [testbed-node-5]
2025-05-23 01:13:28.189100 | orchestrator |
2025-05-23 01:13:28.189126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-23 01:13:28.189161 | orchestrator | Friday 23 May 2025 01:05:09 +0000 (0:00:01.411) 0:00:01.744 ************
2025-05-23 01:13:28.189196 | orchestrator | changed: [testbed-manager]
2025-05-23 01:13:28.189230 | orchestrator | changed: [testbed-node-0]
2025-05-23 01:13:28.189252 | orchestrator | changed: [testbed-node-1]
2025-05-23 01:13:28.189273 | orchestrator | changed: [testbed-node-2]
2025-05-23 01:13:28.189293 | orchestrator | changed: [testbed-node-3]
2025-05-23 01:13:28.189313 | orchestrator | changed: [testbed-node-4]
2025-05-23 01:13:28.189331 | orchestrator | changed: [testbed-node-5]
2025-05-23 01:13:28.189350 | orchestrator |
2025-05-23 01:13:28.189464 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-23 01:13:28.189484 | orchestrator | Friday 23 May 2025 01:05:10 +0000 (0:00:01.150) 0:00:02.895 ************
2025-05-23 01:13:28.189505 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-23 01:13:28.189543 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-23 01:13:28.189572 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-23 01:13:28.189619 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-23 01:13:28.189654 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-23 01:13:28.189694 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-23 01:13:28.189726 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-23 01:13:28.189746 | orchestrator |
2025-05-23 01:13:28.189806 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-23 01:13:28.189829 | orchestrator |
2025-05-23 01:13:28.189849 | orchestrator | TASK [Bootstrap deploy]
******************************************************** 2025-05-23 01:13:28.189867 | orchestrator | Friday 23 May 2025 01:05:11 +0000 (0:00:01.469) 0:00:04.365 ************ 2025-05-23 01:13:28.189923 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.189943 | orchestrator | 2025-05-23 01:13:28.189962 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-23 01:13:28.189982 | orchestrator | Friday 23 May 2025 01:05:12 +0000 (0:00:00.576) 0:00:04.942 ************ 2025-05-23 01:13:28.190002 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-23 01:13:28.190087 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-23 01:13:28.190109 | orchestrator | 2025-05-23 01:13:28.190129 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-23 01:13:28.190148 | orchestrator | Friday 23 May 2025 01:05:16 +0000 (0:00:04.180) 0:00:09.123 ************ 2025-05-23 01:13:28.190167 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 01:13:28.190184 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-23 01:13:28.190195 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.190206 | orchestrator | 2025-05-23 01:13:28.190217 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-23 01:13:28.190228 | orchestrator | Friday 23 May 2025 01:05:20 +0000 (0:00:04.390) 0:00:13.513 ************ 2025-05-23 01:13:28.190238 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.190249 | orchestrator | 2025-05-23 01:13:28.190259 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-23 01:13:28.190270 | orchestrator | Friday 23 May 2025 01:05:21 +0000 (0:00:00.706) 0:00:14.220 ************ 2025-05-23 01:13:28.190280 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.190291 | orchestrator | 2025-05-23 01:13:28.190301 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-23 01:13:28.190312 | orchestrator | Friday 23 May 2025 01:05:23 +0000 (0:00:01.698) 0:00:15.918 ************ 2025-05-23 01:13:28.190322 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.190333 | orchestrator | 2025-05-23 01:13:28.190343 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-23 01:13:28.190354 | orchestrator | Friday 23 May 2025 01:05:27 +0000 (0:00:04.097) 0:00:20.015 ************ 2025-05-23 01:13:28.190380 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.190391 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.190401 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.190412 | orchestrator | 2025-05-23 01:13:28.190423 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-23 01:13:28.190433 | orchestrator | Friday 23 May 2025 01:05:27 +0000 (0:00:00.482) 0:00:20.498 ************ 2025-05-23 01:13:28.190444 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.190454 | orchestrator | 2025-05-23 01:13:28.190465 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-23 01:13:28.190475 | orchestrator | Friday 23 May 2025 01:05:55 +0000 (0:00:27.471) 0:00:47.969 ************ 2025-05-23 01:13:28.190485 | orchestrator | changed: [testbed-node-0] 
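The "Running Nova API bootstrap container" and "Create cell0 mappings" steps above, and the "Get a list of existing cells" / "Create cell" steps that follow, are kolla-ansible wrappers around the standard nova-manage bootstrap sequence. Roughly, the commands executed inside the bootstrap containers look like the following sketch; the connection URLs are placeholders, not the values used in this deployment:

    # Approximate nova-manage sequence behind the nova API and cell bootstrap tasks (URLs are placeholders).
    nova-manage api_db sync                                   # populate the nova_api schema
    nova-manage cell_v2 map_cell0 \
        --database_connection mysql+pymysql://nova:secret@db/nova_cell0    # "Create cell0 mappings"
    nova-manage db sync                                       # populate the cell database schema
    nova-manage cell_v2 list_cells --verbose                  # "Get a list of existing cells"
    nova-manage cell_v2 create_cell --name cell1 \
        --database_connection mysql+pymysql://nova:secret@db/nova \
        --transport-url rabbit://openstack:secret@rabbitmq:5672/           # "Create cell"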
2025-05-23 01:13:28.190496 | orchestrator | 2025-05-23 01:13:28.190506 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-23 01:13:28.190517 | orchestrator | Friday 23 May 2025 01:06:09 +0000 (0:00:13.587) 0:01:01.557 ************ 2025-05-23 01:13:28.190528 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.190538 | orchestrator | 2025-05-23 01:13:28.190549 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-23 01:13:28.190559 | orchestrator | Friday 23 May 2025 01:06:19 +0000 (0:00:10.061) 0:01:11.619 ************ 2025-05-23 01:13:28.190613 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.190635 | orchestrator | 2025-05-23 01:13:28.190654 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-23 01:13:28.190671 | orchestrator | Friday 23 May 2025 01:06:19 +0000 (0:00:00.888) 0:01:12.507 ************ 2025-05-23 01:13:28.190688 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.190699 | orchestrator | 2025-05-23 01:13:28.190713 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-23 01:13:28.190747 | orchestrator | Friday 23 May 2025 01:06:20 +0000 (0:00:00.624) 0:01:13.132 ************ 2025-05-23 01:13:28.190766 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.190786 | orchestrator | 2025-05-23 01:13:28.190804 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-23 01:13:28.190821 | orchestrator | Friday 23 May 2025 01:06:21 +0000 (0:00:00.766) 0:01:13.899 ************ 2025-05-23 01:13:28.190832 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.190842 | orchestrator | 2025-05-23 01:13:28.190852 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-23 01:13:28.190863 | orchestrator | Friday 23 May 2025 01:06:36 +0000 (0:00:15.595) 0:01:29.495 ************ 2025-05-23 01:13:28.190874 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.190885 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.190895 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.190906 | orchestrator | 2025-05-23 01:13:28.190916 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-23 01:13:28.190927 | orchestrator | 2025-05-23 01:13:28.190938 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-23 01:13:28.190948 | orchestrator | Friday 23 May 2025 01:06:37 +0000 (0:00:00.303) 0:01:29.798 ************ 2025-05-23 01:13:28.190960 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.190978 | orchestrator | 2025-05-23 01:13:28.191049 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-23 01:13:28.191069 | orchestrator | Friday 23 May 2025 01:06:38 +0000 (0:00:00.802) 0:01:30.601 ************ 2025-05-23 01:13:28.191143 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191177 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191190 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.191201 | orchestrator | 2025-05-23 01:13:28.191211 | orchestrator | TASK [nova-cell : Creating Nova cell database user 
and setting permissions] **** 2025-05-23 01:13:28.191222 | orchestrator | Friday 23 May 2025 01:06:40 +0000 (0:00:02.275) 0:01:32.877 ************ 2025-05-23 01:13:28.191232 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191300 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191320 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.191347 | orchestrator | 2025-05-23 01:13:28.191367 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-23 01:13:28.191402 | orchestrator | Friday 23 May 2025 01:06:42 +0000 (0:00:02.245) 0:01:35.122 ************ 2025-05-23 01:13:28.191420 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.191436 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191454 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191471 | orchestrator | 2025-05-23 01:13:28.191490 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-23 01:13:28.191509 | orchestrator | Friday 23 May 2025 01:06:43 +0000 (0:00:00.790) 0:01:35.913 ************ 2025-05-23 01:13:28.191526 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-23 01:13:28.191541 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191552 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-23 01:13:28.191563 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191573 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-23 01:13:28.191584 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-23 01:13:28.191639 | orchestrator | 2025-05-23 01:13:28.191650 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-23 01:13:28.191661 | orchestrator | Friday 23 May 2025 01:06:52 +0000 (0:00:08.650) 0:01:44.564 ************ 2025-05-23 01:13:28.191672 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.191682 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191693 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191716 | orchestrator | 2025-05-23 01:13:28.191727 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-23 01:13:28.191738 | orchestrator | Friday 23 May 2025 01:06:52 +0000 (0:00:00.414) 0:01:44.979 ************ 2025-05-23 01:13:28.191751 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-23 01:13:28.191763 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.191776 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-23 01:13:28.191788 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191808 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-23 01:13:28.191820 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191832 | orchestrator | 2025-05-23 01:13:28.191844 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-23 01:13:28.191856 | orchestrator | Friday 23 May 2025 01:06:53 +0000 (0:00:01.271) 0:01:46.250 ************ 2025-05-23 01:13:28.191868 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191880 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191892 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.191904 | orchestrator | 2025-05-23 01:13:28.191916 | orchestrator | TASK [nova-cell : Copying over config.json files for 
nova-cell-bootstrap] ****** 2025-05-23 01:13:28.191928 | orchestrator | Friday 23 May 2025 01:06:54 +0000 (0:00:00.509) 0:01:46.760 ************ 2025-05-23 01:13:28.191940 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.191952 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.191964 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.191976 | orchestrator | 2025-05-23 01:13:28.191987 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-23 01:13:28.191999 | orchestrator | Friday 23 May 2025 01:06:55 +0000 (0:00:01.199) 0:01:47.960 ************ 2025-05-23 01:13:28.192012 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192036 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192048 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.192060 | orchestrator | 2025-05-23 01:13:28.192072 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-23 01:13:28.192085 | orchestrator | Friday 23 May 2025 01:06:58 +0000 (0:00:02.826) 0:01:50.786 ************ 2025-05-23 01:13:28.192097 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192107 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192118 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.192129 | orchestrator | 2025-05-23 01:13:28.192140 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-23 01:13:28.192151 | orchestrator | Friday 23 May 2025 01:07:18 +0000 (0:00:19.802) 0:02:10.589 ************ 2025-05-23 01:13:28.192161 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192172 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192182 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.192193 | orchestrator | 2025-05-23 01:13:28.192207 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-23 01:13:28.192230 | orchestrator | Friday 23 May 2025 01:07:29 +0000 (0:00:11.332) 0:02:21.921 ************ 2025-05-23 01:13:28.192258 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.192277 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192296 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192314 | orchestrator | 2025-05-23 01:13:28.192332 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-23 01:13:28.192351 | orchestrator | Friday 23 May 2025 01:07:30 +0000 (0:00:01.417) 0:02:23.339 ************ 2025-05-23 01:13:28.192370 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192389 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192409 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.192427 | orchestrator | 2025-05-23 01:13:28.192448 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-23 01:13:28.192469 | orchestrator | Friday 23 May 2025 01:07:41 +0000 (0:00:10.451) 0:02:33.790 ************ 2025-05-23 01:13:28.192487 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.192508 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192519 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192529 | orchestrator | 2025-05-23 01:13:28.192540 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-23 01:13:28.192551 | orchestrator | Friday 23 
May 2025 01:07:42 +0000 (0:00:01.587) 0:02:35.378 ************ 2025-05-23 01:13:28.192561 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.192571 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.192582 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.192623 | orchestrator | 2025-05-23 01:13:28.192634 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-23 01:13:28.192645 | orchestrator | 2025-05-23 01:13:28.192655 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-23 01:13:28.192666 | orchestrator | Friday 23 May 2025 01:07:43 +0000 (0:00:00.523) 0:02:35.902 ************ 2025-05-23 01:13:28.192676 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.192688 | orchestrator | 2025-05-23 01:13:28.192699 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-23 01:13:28.192709 | orchestrator | Friday 23 May 2025 01:07:44 +0000 (0:00:00.651) 0:02:36.553 ************ 2025-05-23 01:13:28.192720 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-23 01:13:28.192730 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-23 01:13:28.192741 | orchestrator | 2025-05-23 01:13:28.192751 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-23 01:13:28.192762 | orchestrator | Friday 23 May 2025 01:07:47 +0000 (0:00:03.328) 0:02:39.882 ************ 2025-05-23 01:13:28.192773 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-23 01:13:28.192785 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-23 01:13:28.192796 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-23 01:13:28.192807 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-23 01:13:28.192817 | orchestrator | 2025-05-23 01:13:28.192828 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-23 01:13:28.192839 | orchestrator | Friday 23 May 2025 01:07:54 +0000 (0:00:06.677) 0:02:46.560 ************ 2025-05-23 01:13:28.192849 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-23 01:13:28.192860 | orchestrator | 2025-05-23 01:13:28.192877 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-23 01:13:28.192888 | orchestrator | Friday 23 May 2025 01:07:57 +0000 (0:00:03.164) 0:02:49.724 ************ 2025-05-23 01:13:28.192899 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-23 01:13:28.192909 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-23 01:13:28.192920 | orchestrator | 2025-05-23 01:13:28.192930 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-23 01:13:28.192941 | orchestrator | Friday 23 May 2025 01:08:00 +0000 (0:00:03.671) 0:02:53.395 ************ 2025-05-23 01:13:28.192952 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-23 01:13:28.192962 | orchestrator | 2025-05-23 01:13:28.192973 | 
orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-23 01:13:28.192983 | orchestrator | Friday 23 May 2025 01:08:04 +0000 (0:00:03.349) 0:02:56.744 ************ 2025-05-23 01:13:28.192994 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-23 01:13:28.193005 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-23 01:13:28.193015 | orchestrator | 2025-05-23 01:13:28.193026 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-23 01:13:28.193054 | orchestrator | Friday 23 May 2025 01:08:12 +0000 (0:00:08.130) 0:03:04.874 ************ 2025-05-23 01:13:28.193072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
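The service-ks-register tasks a few entries above (creating the compute service, its internal and public endpoints, the service user and the role grants) are equivalent to the following openstack CLI calls; the region name, description and password are illustrative assumptions, while the endpoint URLs and role names are taken from the task output above:

    # Keystone registration performed by service-ks-register, expressed as plain CLI (sketch).
    openstack service create --name nova --description "OpenStack Compute" compute
    openstack endpoint create --region RegionOne compute internal https://api-int.testbed.osism.xyz:8774/v2.1
    openstack endpoint create --region RegionOne compute public https://api.testbed.osism.xyz:8774/v2.1
    openstack user create --project service --password REPLACE_ME nova
    openstack role add --project service --user nova admin
    openstack role add --project service --user nova service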
2025-05-23 01:13:28.193184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193211 | orchestrator | 2025-05-23 01:13:28.193223 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-23 01:13:28.193234 | orchestrator | Friday 23 May 2025 01:08:13 +0000 (0:00:01.604) 0:03:06.478 ************ 2025-05-23 01:13:28.193245 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.193255 | orchestrator | 2025-05-23 01:13:28.193266 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-23 01:13:28.193284 | orchestrator | Friday 23 May 2025 01:08:14 +0000 (0:00:00.345) 0:03:06.824 ************ 2025-05-23 01:13:28.193295 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.193306 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.193316 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.193327 | orchestrator | 2025-05-23 01:13:28.193338 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-23 01:13:28.193348 | orchestrator | Friday 23 May 2025 01:08:14 +0000 (0:00:00.316) 0:03:07.140 ************ 2025-05-23 01:13:28.193359 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-23 01:13:28.193370 | orchestrator | 2025-05-23 01:13:28.193385 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-23 01:13:28.193397 | orchestrator | Friday 23 May 2025 01:08:15 +0000 (0:00:00.580) 0:03:07.721 ************ 2025-05-23 01:13:28.193407 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.193418 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.193429 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.193440 | orchestrator | 2025-05-23 01:13:28.193451 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-23 01:13:28.193461 | orchestrator | Friday 23 May 2025 01:08:15 +0000 (0:00:00.302) 0:03:08.023 ************ 2025-05-23 01:13:28.193472 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.193483 | orchestrator | 2025-05-23 01:13:28.193493 | orchestrator | TASK [service-cert-copy : nova | 
Copying over extra CA certificates] *********** 2025-05-23 01:13:28.193504 | orchestrator | Friday 23 May 2025 01:08:16 +0000 (0:00:00.815) 0:03:08.839 ************ 2025-05-23 01:13:28.193516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.193569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.193630 | orchestrator | 2025-05-23 01:13:28.193641 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-23 01:13:28.193652 | orchestrator | Friday 23 May 2025 01:08:18 +0000 (0:00:02.666) 0:03:11.505 ************ 2025-05-23 01:13:28.193705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193743 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.193755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193779 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.193790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193824 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.193835 | orchestrator | 2025-05-23 01:13:28.193846 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-23 01:13:28.193857 | orchestrator | Friday 23 May 2025 01:08:19 +0000 (0:00:00.620) 0:03:12.125 ************ 2025-05-23 01:13:28.193876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193900 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.193911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193945 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.193964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.193977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.193988 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.193999 | orchestrator | 2025-05-23 01:13:28.194010 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-23 01:13:28.194073 | orchestrator | Friday 23 May 2025 01:08:20 +0000 (0:00:01.312) 0:03:13.437 ************ 2025-05-23 01:13:28.194085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194231 | orchestrator | 2025-05-23 01:13:28.194242 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-23 01:13:28.194253 | orchestrator | Friday 23 May 2025 01:08:23 +0000 (0:00:02.628) 0:03:16.066 ************ 2025-05-23 01:13:28.194264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.194392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194404 | orchestrator | 2025-05-23 01:13:28.194414 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-23 01:13:28.194425 | orchestrator | Friday 23 May 2025 01:08:30 +0000 (0:00:07.024) 0:03:23.091 ************ 2025-05-23 01:13:28.194437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.194455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194477 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.194497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.194517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 
01:13:28.194529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194540 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.194557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-23 01:13:28.194569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.194626 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.194646 | orchestrator | 2025-05-23 01:13:28.194667 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-23 01:13:28.194686 | 
orchestrator | Friday 23 May 2025 01:08:31 +0000 (0:00:00.814) 0:03:23.905 ************ 2025-05-23 01:13:28.194707 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.194726 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:13:28.194745 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.194764 | orchestrator | 2025-05-23 01:13:28.194783 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-23 01:13:28.194802 | orchestrator | Friday 23 May 2025 01:08:33 +0000 (0:00:01.664) 0:03:25.570 ************ 2025-05-23 01:13:28.194846 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.194867 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.194887 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.194907 | orchestrator | 2025-05-23 01:13:28.194928 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-23 01:13:28.194947 | orchestrator | Friday 23 May 2025 01:08:33 +0000 (0:00:00.455) 0:03:26.026 ************ 2025-05-23 01:13:28.194961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.194983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.195002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-23 01:13:28.195148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.195170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.195182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.195193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.195204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.195221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.195232 | orchestrator | 2025-05-23 01:13:28.195243 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-23 01:13:28.195254 | orchestrator | Friday 23 May 2025 01:08:35 +0000 (0:00:02.042) 0:03:28.068 ************ 2025-05-23 01:13:28.195265 | orchestrator | 2025-05-23 01:13:28.195275 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-23 01:13:28.195286 | orchestrator | Friday 23 May 2025 01:08:35 +0000 (0:00:00.316) 0:03:28.385 ************ 2025-05-23 01:13:28.195297 | orchestrator | 2025-05-23 01:13:28.195307 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-23 01:13:28.195318 | orchestrator | Friday 23 May 2025 01:08:35 +0000 (0:00:00.107) 0:03:28.492 ************ 2025-05-23 01:13:28.195328 | orchestrator | 2025-05-23 01:13:28.195345 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-23 01:13:28.195363 | orchestrator | Friday 23 May 2025 01:08:36 +0000 (0:00:00.267) 0:03:28.760 ************ 2025-05-23 01:13:28.195374 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.195384 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:13:28.195395 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.195406 | orchestrator | 2025-05-23 01:13:28.195416 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-23 01:13:28.195427 | orchestrator | Friday 23 May 2025 01:08:59 +0000 (0:00:23.272) 0:03:52.033 ************ 2025-05-23 01:13:28.195438 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.195449 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.195459 | orchestrator | changed: 
[testbed-node-1] 2025-05-23 01:13:28.195470 | orchestrator | 2025-05-23 01:13:28.195480 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-23 01:13:28.195571 | orchestrator | 2025-05-23 01:13:28.195582 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-23 01:13:28.195648 | orchestrator | Friday 23 May 2025 01:09:05 +0000 (0:00:06.173) 0:03:58.207 ************ 2025-05-23 01:13:28.195660 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.195672 | orchestrator | 2025-05-23 01:13:28.195683 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-23 01:13:28.195693 | orchestrator | Friday 23 May 2025 01:09:07 +0000 (0:00:01.524) 0:03:59.731 ************ 2025-05-23 01:13:28.195704 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.195714 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.195725 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.195736 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.195746 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.195757 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.195767 | orchestrator | 2025-05-23 01:13:28.195782 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-23 01:13:28.195802 | orchestrator | Friday 23 May 2025 01:09:07 +0000 (0:00:00.780) 0:04:00.512 ************ 2025-05-23 01:13:28.195819 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.195837 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.195856 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.195867 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:13:28.195878 | orchestrator | 2025-05-23 01:13:28.195888 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-23 01:13:28.195899 | orchestrator | Friday 23 May 2025 01:09:09 +0000 (0:00:01.242) 0:04:01.755 ************ 2025-05-23 01:13:28.195909 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-23 01:13:28.195920 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-23 01:13:28.195931 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-23 01:13:28.195941 | orchestrator | 2025-05-23 01:13:28.195952 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-23 01:13:28.195962 | orchestrator | Friday 23 May 2025 01:09:10 +0000 (0:00:00.849) 0:04:02.605 ************ 2025-05-23 01:13:28.195973 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-23 01:13:28.195983 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-23 01:13:28.195994 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-23 01:13:28.196004 | orchestrator | 2025-05-23 01:13:28.196014 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-23 01:13:28.196025 | orchestrator | Friday 23 May 2025 01:09:11 +0000 (0:00:01.406) 0:04:04.012 ************ 2025-05-23 01:13:28.196035 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-23 01:13:28.196046 | orchestrator | skipping: 
[testbed-node-3] 2025-05-23 01:13:28.196056 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-23 01:13:28.196094 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.196105 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-23 01:13:28.196115 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.196126 | orchestrator | 2025-05-23 01:13:28.196151 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-23 01:13:28.196162 | orchestrator | Friday 23 May 2025 01:09:12 +0000 (0:00:00.660) 0:04:04.672 ************ 2025-05-23 01:13:28.196171 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-23 01:13:28.196181 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-23 01:13:28.196190 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.196199 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-23 01:13:28.196214 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-23 01:13:28.196234 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-23 01:13:28.196243 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.196252 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-23 01:13:28.196262 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-23 01:13:28.196271 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-23 01:13:28.196281 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.196290 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-23 01:13:28.196300 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-23 01:13:28.196309 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-23 01:13:28.196427 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-23 01:13:28.196440 | orchestrator | 2025-05-23 01:13:28.196457 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-23 01:13:28.196467 | orchestrator | Friday 23 May 2025 01:09:14 +0000 (0:00:02.347) 0:04:07.020 ************ 2025-05-23 01:13:28.196476 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.196486 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.196496 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.196505 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.196515 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.196524 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.196534 | orchestrator | 2025-05-23 01:13:28.196543 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-23 01:13:28.196553 | orchestrator | Friday 23 May 2025 01:09:15 +0000 (0:00:01.168) 0:04:08.188 ************ 2025-05-23 01:13:28.196562 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.196572 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.196581 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.196612 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.196622 | 
orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.196632 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.196641 | orchestrator | 2025-05-23 01:13:28.196651 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-23 01:13:28.196660 | orchestrator | Friday 23 May 2025 01:09:17 +0000 (0:00:01.852) 0:04:10.041 ************ 2025-05-23 01:13:28.196671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.196737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.196748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.196803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.196820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.196831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.196847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.196876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.196886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.196904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196940 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.196950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.196964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.196975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.196991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197016 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.197026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.197036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.197092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.197102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.197181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.197191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197276 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197300 | orchestrator | 2025-05-23 01:13:28.197310 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-23 01:13:28.197319 | orchestrator | Friday 23 May 2025 01:09:20 +0000 (0:00:02.710) 0:04:12.751 ************ 2025-05-23 01:13:28.197329 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-23 01:13:28.197339 | orchestrator | 2025-05-23 01:13:28.197348 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-23 01:13:28.197358 | orchestrator | Friday 23 May 2025 01:09:21 +0000 (0:00:01.620) 0:04:14.372 ************ 2025-05-23 01:13:28.197380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.197568 | orchestrator | 2025-05-23 01:13:28.197578 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-23 01:13:28.197602 | orchestrator | Friday 23 May 2025 01:09:25 +0000 (0:00:03.885) 0:04:18.257 ************ 2025-05-23 01:13:28.197613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.197627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.197648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.197669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.197679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197689 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.197698 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.197713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.197734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.197744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197754 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.197764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.197774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197784 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.197794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.197814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.197824 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.198367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.198404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.198421 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.198436 | orchestrator | 2025-05-23 01:13:28.198452 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-23 01:13:28.198469 | orchestrator | Friday 23 May 2025 01:09:27 +0000 (0:00:02.077) 0:04:20.334 ************ 2025-05-23 01:13:28.198487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.198505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.198532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.198562 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.198702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.198730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.198748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.198806 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.198819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.198901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.198920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.198930 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.198951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.198961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.198982 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.198991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.198999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.199016 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.199024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.199036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.199045 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.199052 | orchestrator | 2025-05-23 01:13:28.199060 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-23 01:13:28.199069 | orchestrator | Friday 23 May 2025 01:09:30 +0000 (0:00:02.827) 0:04:23.161 ************ 2025-05-23 01:13:28.199076 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.199084 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.199092 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.199100 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-23 01:13:28.199107 | orchestrator | 2025-05-23 01:13:28.199115 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-23 01:13:28.199123 | orchestrator | Friday 23 May 2025 01:09:31 +0000 (0:00:01.285) 0:04:24.446 ************ 2025-05-23 01:13:28.199137 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 01:13:28.199145 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-23 01:13:28.199153 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-23 01:13:28.199161 | orchestrator | 2025-05-23 01:13:28.199168 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-23 01:13:28.199176 | orchestrator | Friday 23 May 2025 01:09:32 +0000 (0:00:00.877) 0:04:25.324 ************ 2025-05-23 01:13:28.199184 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 01:13:28.199191 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-23 01:13:28.199199 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-23 01:13:28.199207 | orchestrator | 2025-05-23 01:13:28.199215 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-23 01:13:28.199229 | orchestrator | Friday 23 May 2025 01:09:33 +0000 (0:00:00.821) 0:04:26.146 ************ 2025-05-23 01:13:28.199242 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:13:28.199255 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:13:28.199268 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:13:28.199284 | orchestrator | 2025-05-23 01:13:28.199297 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-23 01:13:28.199309 | orchestrator | 
Friday 23 May 2025 01:09:34 +0000 (0:00:00.775) 0:04:26.921 ************ 2025-05-23 01:13:28.199318 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:13:28.199327 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:13:28.199335 | orchestrator | ok: [testbed-node-5] 2025-05-23 01:13:28.199344 | orchestrator | 2025-05-23 01:13:28.199353 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-23 01:13:28.199362 | orchestrator | Friday 23 May 2025 01:09:34 +0000 (0:00:00.554) 0:04:27.476 ************ 2025-05-23 01:13:28.199377 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-23 01:13:28.199386 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-23 01:13:28.199395 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-23 01:13:28.199407 | orchestrator | 2025-05-23 01:13:28.199421 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-23 01:13:28.199434 | orchestrator | Friday 23 May 2025 01:09:36 +0000 (0:00:01.400) 0:04:28.876 ************ 2025-05-23 01:13:28.199447 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-23 01:13:28.199460 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-23 01:13:28.199473 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-23 01:13:28.199536 | orchestrator | 2025-05-23 01:13:28.199551 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-23 01:13:28.199561 | orchestrator | Friday 23 May 2025 01:09:37 +0000 (0:00:01.377) 0:04:30.254 ************ 2025-05-23 01:13:28.199570 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-23 01:13:28.199579 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-23 01:13:28.199604 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-23 01:13:28.199613 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-23 01:13:28.199623 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-23 01:13:28.199632 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-23 01:13:28.199640 | orchestrator | 2025-05-23 01:13:28.199649 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-23 01:13:28.199658 | orchestrator | Friday 23 May 2025 01:09:43 +0000 (0:00:05.283) 0:04:35.538 ************ 2025-05-23 01:13:28.199667 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.199675 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.199683 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.199690 | orchestrator | 2025-05-23 01:13:28.199698 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-23 01:13:28.199706 | orchestrator | Friday 23 May 2025 01:09:43 +0000 (0:00:00.470) 0:04:36.009 ************ 2025-05-23 01:13:28.199714 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.199721 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.199729 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.199737 | orchestrator | 2025-05-23 01:13:28.199745 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-23 01:13:28.199752 | orchestrator | Friday 23 May 2025 01:09:43 +0000 (0:00:00.472) 0:04:36.481 ************ 2025-05-23 01:13:28.199760 | 
orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.199768 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.199775 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.199783 | orchestrator | 2025-05-23 01:13:28.199791 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-23 01:13:28.199798 | orchestrator | Friday 23 May 2025 01:09:45 +0000 (0:00:01.543) 0:04:38.025 ************ 2025-05-23 01:13:28.199812 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-23 01:13:28.199821 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-23 01:13:28.199829 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-23 01:13:28.199837 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-23 01:13:28.199845 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-23 01:13:28.199853 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-23 01:13:28.199871 | orchestrator | 2025-05-23 01:13:28.199879 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-23 01:13:28.199893 | orchestrator | Friday 23 May 2025 01:09:49 +0000 (0:00:03.514) 0:04:41.539 ************ 2025-05-23 01:13:28.199902 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 01:13:28.199909 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 01:13:28.199917 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 01:13:28.199925 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-23 01:13:28.199933 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.199940 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-23 01:13:28.199948 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.199956 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-23 01:13:28.199964 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.199971 | orchestrator | 2025-05-23 01:13:28.199979 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-23 01:13:28.199987 | orchestrator | Friday 23 May 2025 01:09:52 +0000 (0:00:03.471) 0:04:45.011 ************ 2025-05-23 01:13:28.199994 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.200002 | orchestrator | 2025-05-23 01:13:28.200010 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-23 01:13:28.200018 | orchestrator | Friday 23 May 2025 01:09:52 +0000 (0:00:00.134) 0:04:45.145 ************ 2025-05-23 01:13:28.200026 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.200033 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.200041 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.200049 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.200056 | orchestrator | skipping: [testbed-node-1] 2025-05-23 
01:13:28.200064 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.200072 | orchestrator | 2025-05-23 01:13:28.200079 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-23 01:13:28.200087 | orchestrator | Friday 23 May 2025 01:09:53 +0000 (0:00:00.957) 0:04:46.102 ************ 2025-05-23 01:13:28.200095 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-23 01:13:28.200103 | orchestrator | 2025-05-23 01:13:28.200110 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-23 01:13:28.200118 | orchestrator | Friday 23 May 2025 01:09:53 +0000 (0:00:00.390) 0:04:46.493 ************ 2025-05-23 01:13:28.200126 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.200134 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.200141 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.200149 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.200156 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.200164 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.200172 | orchestrator | 2025-05-23 01:13:28.200179 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-23 01:13:28.200187 | orchestrator | Friday 23 May 2025 01:09:54 +0000 (0:00:00.866) 0:04:47.359 ************ 2025-05-23 01:13:28.200196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:28.200504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy',
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.200724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200732 | orchestrator | 2025-05-23 01:13:28.200740 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-23 01:13:28.200748 | orchestrator | Friday 23 May 2025 01:09:59 +0000 (0:00:04.403) 0:04:51.763 ************ 2025-05-23 01:13:28.200756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.200882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.200895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.200903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.200920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.200931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.201397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.201424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.201433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.201451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.201466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.201513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.201579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.201657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.201785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.201825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.201861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.201874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.201963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.201988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.202003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.202048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.202066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.202080 | orchestrator | 2025-05-23 01:13:28.202098 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-23 01:13:28.202112 | orchestrator | Friday 23 May 2025 01:10:07 +0000 (0:00:08.058) 0:04:59.822 ************ 2025-05-23 01:13:28.202125 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.202138 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.202152 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.202164 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.202177 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.202190 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.202212 | orchestrator | 2025-05-23 01:13:28.202224 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-23 01:13:28.202235 | orchestrator | Friday 23 May 2025 01:10:09 +0000 (0:00:01.981) 0:05:01.803 ************ 2025-05-23 01:13:28.202245 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-23 01:13:28.202257 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-23 01:13:28.202269 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-23 01:13:28.202279 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-23 01:13:28.202290 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.202330 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-23 01:13:28.202344 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-23 01:13:28.202355 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-23 01:13:28.202366 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-23 01:13:28.202377 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.202387 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-23 01:13:28.202398 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.202408 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-23 01:13:28.202418 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-23 01:13:28.202429 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-23 01:13:28.202438 | orchestrator | 2025-05-23 01:13:28.202448 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-23 01:13:28.202459 | orchestrator | Friday 23 May 2025 01:10:14 +0000 (0:00:05.676) 0:05:07.479 ************ 2025-05-23 01:13:28.202471 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.202482 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.202494 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.202505 | orchestrator | 
skipping: [testbed-node-0] 2025-05-23 01:13:28.202517 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.202527 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.202538 | orchestrator | 2025-05-23 01:13:28.202549 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-23 01:13:28.202559 | orchestrator | Friday 23 May 2025 01:10:15 +0000 (0:00:00.980) 0:05:08.460 ************ 2025-05-23 01:13:28.202571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-23 01:13:28.202582 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-23 01:13:28.202613 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202624 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-23 01:13:28.202635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202645 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-23 01:13:28.202656 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202667 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-23 01:13:28.202679 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202699 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.202709 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-23 01:13:28.202720 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202730 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.202740 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-23 01:13:28.202750 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.202761 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202777 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202789 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202800 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202810 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202820 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-23 01:13:28.202831 | orchestrator | 2025-05-23 01:13:28.202842 | orchestrator | TASK [nova-cell : Copying files for 
nova-ssh] ********************************** 2025-05-23 01:13:28.202853 | orchestrator | Friday 23 May 2025 01:10:23 +0000 (0:00:07.568) 0:05:16.029 ************ 2025-05-23 01:13:28.202865 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 01:13:28.202875 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 01:13:28.202919 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-23 01:13:28.202932 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-23 01:13:28.202942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-23 01:13:28.202954 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-23 01:13:28.202964 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 01:13:28.202975 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 01:13:28.202985 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 01:13:28.202996 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 01:13:28.203007 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-23 01:13:28.203017 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-23 01:13:28.203027 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-23 01:13:28.203037 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.203049 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-23 01:13:28.203058 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.203069 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-23 01:13:28.203079 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.203089 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-23 01:13:28.203100 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-23 01:13:28.203120 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-23 01:13:28.203131 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 01:13:28.203142 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 01:13:28.203153 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-23 01:13:28.203164 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 01:13:28.203173 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 01:13:28.203183 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-23 01:13:28.203194 | orchestrator | 2025-05-23 01:13:28.203205 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-23 
01:13:28.203215 | orchestrator | Friday 23 May 2025 01:10:34 +0000 (0:00:11.327) 0:05:27.357 ************ 2025-05-23 01:13:28.203225 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.203236 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.203247 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.203257 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.203267 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.203278 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.203288 | orchestrator | 2025-05-23 01:13:28.203297 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-23 01:13:28.203309 | orchestrator | Friday 23 May 2025 01:10:35 +0000 (0:00:00.854) 0:05:28.211 ************ 2025-05-23 01:13:28.203319 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.203329 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.203339 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.203350 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.203360 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.203370 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.203379 | orchestrator | 2025-05-23 01:13:28.203390 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-23 01:13:28.203400 | orchestrator | Friday 23 May 2025 01:10:36 +0000 (0:00:01.082) 0:05:29.293 ************ 2025-05-23 01:13:28.203410 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.203419 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.203429 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.203448 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.203458 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.203468 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.203478 | orchestrator | 2025-05-23 01:13:28.203487 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-23 01:13:28.203497 | orchestrator | Friday 23 May 2025 01:10:39 +0000 (0:00:03.081) 0:05:32.375 ************ 2025-05-23 01:13:28.203547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.203560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.203582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.203619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.203632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.203643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203702 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203723 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.203734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.203747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.203759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.203775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': 
'30'}}})  2025-05-23 01:13:28.203786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.203819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203864 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.203876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.203887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.203903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.203920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.203938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.203949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.203982 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.204002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.204073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204117 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.204134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.204254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.204265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204351 | orchestrator | skipping: 
[testbed-node-2] 2025-05-23 01:13:28.204362 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.204374 | orchestrator | 2025-05-23 01:13:28.204385 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-23 01:13:28.204397 | orchestrator | Friday 23 May 2025 01:10:41 +0000 (0:00:01.897) 0:05:34.273 ************ 2025-05-23 01:13:28.204409 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-23 01:13:28.204419 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204430 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.204441 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-23 01:13:28.204452 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204470 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.204482 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-23 01:13:28.204493 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204503 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.204515 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-23 01:13:28.204525 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204537 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.204548 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-23 01:13:28.204560 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204576 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.204586 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-23 01:13:28.204657 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-23 01:13:28.204669 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.204680 | orchestrator | 2025-05-23 01:13:28.204692 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-23 01:13:28.204701 | orchestrator | Friday 23 May 2025 01:10:42 +0000 (0:00:01.110) 0:05:35.383 ************ 2025-05-23 01:13:28.204720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-23 01:13:28.204828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-23 01:13:28.204844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.204896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.204955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.204971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.204983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.204993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.205021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.205032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.205047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.205058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.205085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.205096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.205113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-23 01:13:28.205150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-23 01:13:28.205164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-23 01:13:28.205334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-23 01:13:28.205364 | orchestrator | 2025-05-23 01:13:28.205375 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-23 01:13:28.205385 | orchestrator | Friday 23 May 2025 01:10:46 +0000 (0:00:03.460) 0:05:38.844 
************ 2025-05-23 01:13:28.205395 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.205405 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.205414 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.205425 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.205435 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.205445 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.205455 | orchestrator | 2025-05-23 01:13:28.205465 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205475 | orchestrator | Friday 23 May 2025 01:10:47 +0000 (0:00:01.016) 0:05:39.860 ************ 2025-05-23 01:13:28.205485 | orchestrator | 2025-05-23 01:13:28.205496 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205506 | orchestrator | Friday 23 May 2025 01:10:47 +0000 (0:00:00.135) 0:05:39.996 ************ 2025-05-23 01:13:28.205517 | orchestrator | 2025-05-23 01:13:28.205527 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205538 | orchestrator | Friday 23 May 2025 01:10:47 +0000 (0:00:00.332) 0:05:40.328 ************ 2025-05-23 01:13:28.205548 | orchestrator | 2025-05-23 01:13:28.205560 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205570 | orchestrator | Friday 23 May 2025 01:10:47 +0000 (0:00:00.113) 0:05:40.442 ************ 2025-05-23 01:13:28.205579 | orchestrator | 2025-05-23 01:13:28.205608 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205625 | orchestrator | Friday 23 May 2025 01:10:48 +0000 (0:00:00.362) 0:05:40.805 ************ 2025-05-23 01:13:28.205636 | orchestrator | 2025-05-23 01:13:28.205647 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-23 01:13:28.205658 | orchestrator | Friday 23 May 2025 01:10:48 +0000 (0:00:00.109) 0:05:40.915 ************ 2025-05-23 01:13:28.205668 | orchestrator | 2025-05-23 01:13:28.205678 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-23 01:13:28.205688 | orchestrator | Friday 23 May 2025 01:10:48 +0000 (0:00:00.346) 0:05:41.262 ************ 2025-05-23 01:13:28.205698 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.205708 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.205718 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:13:28.205729 | orchestrator | 2025-05-23 01:13:28.205738 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-23 01:13:28.205748 | orchestrator | Friday 23 May 2025 01:10:56 +0000 (0:00:07.790) 0:05:49.052 ************ 2025-05-23 01:13:28.205766 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.205777 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.205788 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:13:28.205797 | orchestrator | 2025-05-23 01:13:28.205806 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-23 01:13:28.205816 | orchestrator | Friday 23 May 2025 01:11:08 +0000 (0:00:11.805) 0:06:00.857 ************ 2025-05-23 01:13:28.205833 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.205843 | orchestrator | 
changed: [testbed-node-5] 2025-05-23 01:13:28.205853 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.205862 | orchestrator | 2025-05-23 01:13:28.205871 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-23 01:13:28.205882 | orchestrator | Friday 23 May 2025 01:11:29 +0000 (0:00:20.920) 0:06:21.778 ************ 2025-05-23 01:13:28.205893 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.205903 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.205914 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.205923 | orchestrator | 2025-05-23 01:13:28.205933 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-23 01:13:28.205943 | orchestrator | Friday 23 May 2025 01:11:55 +0000 (0:00:26.497) 0:06:48.275 ************ 2025-05-23 01:13:28.205952 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.205963 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.205972 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.205981 | orchestrator | 2025-05-23 01:13:28.205991 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-23 01:13:28.206001 | orchestrator | Friday 23 May 2025 01:11:56 +0000 (0:00:00.768) 0:06:49.044 ************ 2025-05-23 01:13:28.206011 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.206055 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.206067 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.206078 | orchestrator | 2025-05-23 01:13:28.206090 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-23 01:13:28.206101 | orchestrator | Friday 23 May 2025 01:11:57 +0000 (0:00:00.973) 0:06:50.017 ************ 2025-05-23 01:13:28.206113 | orchestrator | changed: [testbed-node-4] 2025-05-23 01:13:28.206125 | orchestrator | changed: [testbed-node-3] 2025-05-23 01:13:28.206136 | orchestrator | changed: [testbed-node-5] 2025-05-23 01:13:28.206146 | orchestrator | 2025-05-23 01:13:28.206157 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-23 01:13:28.206167 | orchestrator | Friday 23 May 2025 01:12:20 +0000 (0:00:22.924) 0:07:12.941 ************ 2025-05-23 01:13:28.206178 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.206189 | orchestrator | 2025-05-23 01:13:28.206199 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-23 01:13:28.206210 | orchestrator | Friday 23 May 2025 01:12:20 +0000 (0:00:00.128) 0:07:13.069 ************ 2025-05-23 01:13:28.206221 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.206231 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.206242 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.206252 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.206262 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.206274 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-05-23 01:13:28.206285 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 01:13:28.206296 | orchestrator | 2025-05-23 01:13:28.206308 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-23 01:13:28.206318 | orchestrator | Friday 23 May 2025 01:12:42 +0000 (0:00:22.101) 0:07:35.171 ************ 2025-05-23 01:13:28.206328 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.206339 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.206350 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.206370 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.206380 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.206391 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.206403 | orchestrator | 2025-05-23 01:13:28.206414 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-23 01:13:28.206425 | orchestrator | Friday 23 May 2025 01:12:53 +0000 (0:00:10.824) 0:07:45.996 ************ 2025-05-23 01:13:28.206436 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.206447 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.206458 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.206468 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.206479 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.206489 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-05-23 01:13:28.206500 | orchestrator | 2025-05-23 01:13:28.206510 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-23 01:13:28.206521 | orchestrator | Friday 23 May 2025 01:12:56 +0000 (0:00:03.087) 0:07:49.084 ************ 2025-05-23 01:13:28.206531 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 01:13:28.206541 | orchestrator | 2025-05-23 01:13:28.206552 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-23 01:13:28.206570 | orchestrator | Friday 23 May 2025 01:13:06 +0000 (0:00:10.308) 0:07:59.392 ************ 2025-05-23 01:13:28.206581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 01:13:28.206609 | orchestrator | 2025-05-23 01:13:28.206620 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-23 01:13:28.206630 | orchestrator | Friday 23 May 2025 01:13:08 +0000 (0:00:01.176) 0:08:00.568 ************ 2025-05-23 01:13:28.206640 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.206650 | orchestrator | 2025-05-23 01:13:28.206659 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-23 01:13:28.206669 | orchestrator | Friday 23 May 2025 01:13:09 +0000 (0:00:01.139) 0:08:01.708 ************ 2025-05-23 01:13:28.206679 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-23 01:13:28.206688 | orchestrator | 2025-05-23 01:13:28.206698 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-23 01:13:28.206707 | orchestrator | Friday 23 May 2025 01:13:18 +0000 (0:00:09.238) 0:08:10.947 ************ 2025-05-23 01:13:28.206717 | orchestrator | ok: [testbed-node-3] 2025-05-23 01:13:28.206726 | orchestrator | ok: [testbed-node-4] 2025-05-23 01:13:28.206736 | orchestrator | ok: 
[testbed-node-5] 2025-05-23 01:13:28.206745 | orchestrator | ok: [testbed-node-0] 2025-05-23 01:13:28.206755 | orchestrator | ok: [testbed-node-1] 2025-05-23 01:13:28.206765 | orchestrator | ok: [testbed-node-2] 2025-05-23 01:13:28.206775 | orchestrator | 2025-05-23 01:13:28.206795 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-23 01:13:28.206806 | orchestrator | 2025-05-23 01:13:28.206816 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-23 01:13:28.206826 | orchestrator | Friday 23 May 2025 01:13:20 +0000 (0:00:02.114) 0:08:13.061 ************ 2025-05-23 01:13:28.206835 | orchestrator | changed: [testbed-node-0] 2025-05-23 01:13:28.206845 | orchestrator | changed: [testbed-node-1] 2025-05-23 01:13:28.206855 | orchestrator | changed: [testbed-node-2] 2025-05-23 01:13:28.206865 | orchestrator | 2025-05-23 01:13:28.206874 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-23 01:13:28.206884 | orchestrator | 2025-05-23 01:13:28.206895 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-23 01:13:28.206907 | orchestrator | Friday 23 May 2025 01:13:21 +0000 (0:00:01.028) 0:08:14.090 ************ 2025-05-23 01:13:28.206917 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.206927 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.206938 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.206946 | orchestrator | 2025-05-23 01:13:28.206956 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-23 01:13:28.206974 | orchestrator | 2025-05-23 01:13:28.206984 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-23 01:13:28.206993 | orchestrator | Friday 23 May 2025 01:13:22 +0000 (0:00:00.825) 0:08:14.916 ************ 2025-05-23 01:13:28.207002 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-23 01:13:28.207011 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-23 01:13:28.207021 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207031 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-23 01:13:28.207040 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-23 01:13:28.207050 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207060 | orchestrator | skipping: [testbed-node-3] 2025-05-23 01:13:28.207069 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-23 01:13:28.207079 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-23 01:13:28.207088 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207097 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-23 01:13:28.207106 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-23 01:13:28.207115 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207125 | orchestrator | skipping: [testbed-node-4] 2025-05-23 01:13:28.207134 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-23 01:13:28.207143 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-23 01:13:28.207152 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207162 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-23 01:13:28.207171 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-23 01:13:28.207180 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207190 | orchestrator | skipping: [testbed-node-5] 2025-05-23 01:13:28.207201 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-23 01:13:28.207211 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-23 01:13:28.207221 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207231 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-23 01:13:28.207241 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-23 01:13:28.207252 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207263 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.207272 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-23 01:13:28.207282 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-23 01:13:28.207292 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207302 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-23 01:13:28.207311 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-23 01:13:28.207321 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207331 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.207340 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-23 01:13:28.207363 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-23 01:13:28.207373 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-23 01:13:28.207382 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-23 01:13:28.207393 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-23 01:13:28.207402 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-23 01:13:28.207420 | orchestrator | skipping: [testbed-node-2] 2025-05-23 01:13:28.207431 | orchestrator | 2025-05-23 01:13:28.207442 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-23 01:13:28.207452 | orchestrator | 2025-05-23 01:13:28.207462 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-23 01:13:28.207473 | orchestrator | Friday 23 May 2025 01:13:23 +0000 (0:00:01.359) 0:08:16.275 ************ 2025-05-23 01:13:28.207483 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-23 01:13:28.207493 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-23 01:13:28.207503 | orchestrator | skipping: [testbed-node-0] 2025-05-23 01:13:28.207513 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-23 01:13:28.207524 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-23 01:13:28.207535 | orchestrator | skipping: [testbed-node-1] 2025-05-23 01:13:28.207554 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-23 01:13:28.207565 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)
2025-05-23 01:13:28.207575 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:13:28.207585 | orchestrator |
2025-05-23 01:13:28.207616 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-23 01:13:28.207627 | orchestrator |
2025-05-23 01:13:28.207638 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-23 01:13:28.207648 | orchestrator | Friday 23 May 2025 01:13:24 +0000 (0:00:00.831) 0:08:17.107 ************
2025-05-23 01:13:28.207659 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:13:28.207667 | orchestrator |
2025-05-23 01:13:28.207673 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-23 01:13:28.207680 | orchestrator |
2025-05-23 01:13:28.207686 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-23 01:13:28.207692 | orchestrator | Friday 23 May 2025 01:13:25 +0000 (0:00:00.925) 0:08:18.033 ************
2025-05-23 01:13:28.207698 | orchestrator | skipping: [testbed-node-0]
2025-05-23 01:13:28.207704 | orchestrator | skipping: [testbed-node-1]
2025-05-23 01:13:28.207710 | orchestrator | skipping: [testbed-node-2]
2025-05-23 01:13:28.207716 | orchestrator |
2025-05-23 01:13:28.207722 | orchestrator | PLAY RECAP *********************************************************************
2025-05-23 01:13:28.207729 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-23 01:13:28.207736 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-23 01:13:28.207743 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-23 01:13:28.207749 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-23 01:13:28.207755 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-23 01:13:28.207761 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-23 01:13:28.207767 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-05-23 01:13:28.207773 | orchestrator |
2025-05-23 01:13:28.207779 | orchestrator |
2025-05-23 01:13:28.207785 | orchestrator | TASKS RECAP ********************************************************************
2025-05-23 01:13:28.207792 | orchestrator | Friday 23 May 2025 01:13:26 +0000 (0:00:00.589) 0:08:18.622 ************
2025-05-23 01:13:28.207798 | orchestrator | ===============================================================================
2025-05-23 01:13:28.207811 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.47s
2025-05-23 01:13:28.207817 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 26.50s
2025-05-23 01:13:28.207823 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.27s
2025-05-23 01:13:28.207829 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.92s
2025-05-23 01:13:28.207835 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.10s
2025-05-23 01:13:28.207841 | orchestrator | nova-cell :
Restart nova-ssh container --------------------------------- 20.92s
2025-05-23 01:13:28.207847 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.80s
2025-05-23 01:13:28.207853 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.60s
2025-05-23 01:13:28.207859 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.59s
2025-05-23 01:13:28.207865 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.81s
2025-05-23 01:13:28.207871 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.33s
2025-05-23 01:13:28.207883 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 11.33s
2025-05-23 01:13:28.207889 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.82s
2025-05-23 01:13:28.207895 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.45s
2025-05-23 01:13:28.207901 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.31s
2025-05-23 01:13:28.207907 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.06s
2025-05-23 01:13:28.207913 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.24s
2025-05-23 01:13:28.207919 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.65s
2025-05-23 01:13:28.207925 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.13s
2025-05-23 01:13:28.207931 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.06s
2025-05-23 01:13:31.241012 | orchestrator | 2025-05-23 01:13:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:31.241118 | orchestrator | 2025-05-23 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:34.297764 | orchestrator | 2025-05-23 01:13:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:34.297867 | orchestrator | 2025-05-23 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:37.345804 | orchestrator | 2025-05-23 01:13:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:37.345910 | orchestrator | 2025-05-23 01:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:40.398434 | orchestrator | 2025-05-23 01:13:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:40.398545 | orchestrator | 2025-05-23 01:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:43.451866 | orchestrator | 2025-05-23 01:13:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:43.451979 | orchestrator | 2025-05-23 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:46.504081 | orchestrator | 2025-05-23 01:13:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:46.504189 | orchestrator | 2025-05-23 01:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:49.546961 | orchestrator | 2025-05-23 01:13:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:49.547069 | orchestrator | 2025-05-23 01:13:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 
01:13:52.595087 | orchestrator | 2025-05-23 01:13:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:52.595187 | orchestrator | 2025-05-23 01:13:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:55.645755 | orchestrator | 2025-05-23 01:13:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:55.645883 | orchestrator | 2025-05-23 01:13:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:13:58.696031 | orchestrator | 2025-05-23 01:13:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:13:58.696133 | orchestrator | 2025-05-23 01:13:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:01.747015 | orchestrator | 2025-05-23 01:14:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:01.747127 | orchestrator | 2025-05-23 01:14:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:04.799496 | orchestrator | 2025-05-23 01:14:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:04.799642 | orchestrator | 2025-05-23 01:14:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:07.846186 | orchestrator | 2025-05-23 01:14:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:07.846292 | orchestrator | 2025-05-23 01:14:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:10.888837 | orchestrator | 2025-05-23 01:14:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:10.888938 | orchestrator | 2025-05-23 01:14:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:13.947262 | orchestrator | 2025-05-23 01:14:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:13.947375 | orchestrator | 2025-05-23 01:14:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:17.001401 | orchestrator | 2025-05-23 01:14:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:17.001542 | orchestrator | 2025-05-23 01:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:20.056612 | orchestrator | 2025-05-23 01:14:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:20.056723 | orchestrator | 2025-05-23 01:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:23.101348 | orchestrator | 2025-05-23 01:14:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:23.101480 | orchestrator | 2025-05-23 01:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:26.151260 | orchestrator | 2025-05-23 01:14:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:26.151373 | orchestrator | 2025-05-23 01:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:29.202886 | orchestrator | 2025-05-23 01:14:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:29.202994 | orchestrator | 2025-05-23 01:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:32.252915 | orchestrator | 2025-05-23 01:14:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:32.253026 | orchestrator | 2025-05-23 01:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:35.300041 | orchestrator | 2025-05-23 01:14:35 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:35.300155 | orchestrator | 2025-05-23 01:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:38.350346 | orchestrator | 2025-05-23 01:14:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:38.350470 | orchestrator | 2025-05-23 01:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:41.400155 | orchestrator | 2025-05-23 01:14:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:41.400259 | orchestrator | 2025-05-23 01:14:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:44.460305 | orchestrator | 2025-05-23 01:14:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:44.460433 | orchestrator | 2025-05-23 01:14:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:47.507893 | orchestrator | 2025-05-23 01:14:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:47.508005 | orchestrator | 2025-05-23 01:14:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:50.553893 | orchestrator | 2025-05-23 01:14:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:50.554120 | orchestrator | 2025-05-23 01:14:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:53.600531 | orchestrator | 2025-05-23 01:14:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:53.600686 | orchestrator | 2025-05-23 01:14:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:56.655839 | orchestrator | 2025-05-23 01:14:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:56.655954 | orchestrator | 2025-05-23 01:14:56 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:14:59.705055 | orchestrator | 2025-05-23 01:14:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:14:59.705156 | orchestrator | 2025-05-23 01:14:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:02.759319 | orchestrator | 2025-05-23 01:15:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:15:02.759429 | orchestrator | 2025-05-23 01:15:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:05.804655 | orchestrator | 2025-05-23 01:15:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:15:05.804758 | orchestrator | 2025-05-23 01:15:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:08.856209 | orchestrator | 2025-05-23 01:15:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:15:08.856315 | orchestrator | 2025-05-23 01:15:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:11.923868 | orchestrator | 2025-05-23 01:15:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:15:11.923970 | orchestrator | 2025-05-23 01:15:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:14.976371 | orchestrator | 2025-05-23 01:15:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:15:14.976532 | orchestrator | 2025-05-23 01:15:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:15:18.033635 | orchestrator | 2025-05-23 01:15:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 
01:15:18.033782 | orchestrator | 2025-05-23 01:15:18 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:15:21.080811 | orchestrator | 2025-05-23 01:15:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:15:21.080948 | orchestrator | 2025-05-23 01:15:21 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:15:24.143880 | orchestrator | 2025-05-23 01:15:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:15:24.143987 | orchestrator | 2025-05-23 01:15:24 | INFO  | Wait 1 second(s) until the next check
[... the same "Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED" / "Wait 1 second(s) until the next check" pair repeats every ~3 seconds until 01:22:34 ...]
2025-05-23 01:22:37.537411 | orchestrator | 2025-05-23 01:22:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:22:37.538342 | orchestrator | 2025-05-23 01:22:37 | INFO  | Task 37938cd7-d73f-485a-893d-489e5ef4ce4d is in state STARTED
2025-05-23 01:22:37.538892 | orchestrator | 2025-05-23 01:22:37 | INFO  | Wait 1 second(s) until the next check
[... both tasks report state STARTED on each check at 01:22:40, 01:22:43 and 01:22:46 ...]
2025-05-23 01:22:49.764080 | orchestrator | 2025-05-23 01:22:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:22:49.764590 | orchestrator | 2025-05-23 01:22:49 | INFO  | Task 37938cd7-d73f-485a-893d-489e5ef4ce4d is in state SUCCESS
2025-05-23 01:22:49.764633 | orchestrator | 2025-05-23 01:22:49 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:22:52.825425 | orchestrator | 2025-05-23 01:22:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:22:52.825527 | orchestrator | 2025-05-23 01:22:52 | INFO  | Wait 1 second(s) until the next check
[... the same "Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED" / "Wait 1 second(s) until the next check" pair repeats every ~3 seconds until 01:30:21 ...]
2025-05-23 01:30:24.367823 | orchestrator | 2025-05-23 01:30:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:30:24.367978 | orchestrator | 2025-05-23 01:30:24 | INFO  | Wait 1 second(s) until the next check
2025-05-23 01:30:27.413966 | orchestrator | 
2025-05-23 01:30:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:27.414199 | orchestrator | 2025-05-23 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:30.463194 | orchestrator | 2025-05-23 01:30:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:30.463295 | orchestrator | 2025-05-23 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:33.508785 | orchestrator | 2025-05-23 01:30:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:33.508889 | orchestrator | 2025-05-23 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:36.561980 | orchestrator | 2025-05-23 01:30:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:36.562150 | orchestrator | 2025-05-23 01:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:39.616430 | orchestrator | 2025-05-23 01:30:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:39.616558 | orchestrator | 2025-05-23 01:30:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:42.672871 | orchestrator | 2025-05-23 01:30:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:42.673032 | orchestrator | 2025-05-23 01:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:45.723357 | orchestrator | 2025-05-23 01:30:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:45.723459 | orchestrator | 2025-05-23 01:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:48.767779 | orchestrator | 2025-05-23 01:30:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:48.767886 | orchestrator | 2025-05-23 01:30:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:51.821893 | orchestrator | 2025-05-23 01:30:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:51.822108 | orchestrator | 2025-05-23 01:30:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:54.873497 | orchestrator | 2025-05-23 01:30:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:54.873601 | orchestrator | 2025-05-23 01:30:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:30:57.925575 | orchestrator | 2025-05-23 01:30:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:30:57.925683 | orchestrator | 2025-05-23 01:30:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:00.977766 | orchestrator | 2025-05-23 01:31:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:00.977875 | orchestrator | 2025-05-23 01:31:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:04.029365 | orchestrator | 2025-05-23 01:31:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:04.029473 | orchestrator | 2025-05-23 01:31:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:07.079149 | orchestrator | 2025-05-23 01:31:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:07.079256 | orchestrator | 2025-05-23 01:31:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:10.131060 | orchestrator | 2025-05-23 01:31:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in 
state STARTED 2025-05-23 01:31:10.131170 | orchestrator | 2025-05-23 01:31:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:13.185767 | orchestrator | 2025-05-23 01:31:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:13.185915 | orchestrator | 2025-05-23 01:31:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:16.239380 | orchestrator | 2025-05-23 01:31:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:16.239483 | orchestrator | 2025-05-23 01:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:19.290790 | orchestrator | 2025-05-23 01:31:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:19.290896 | orchestrator | 2025-05-23 01:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:22.357907 | orchestrator | 2025-05-23 01:31:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:22.358160 | orchestrator | 2025-05-23 01:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:25.413843 | orchestrator | 2025-05-23 01:31:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:25.414007 | orchestrator | 2025-05-23 01:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:28.474400 | orchestrator | 2025-05-23 01:31:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:28.474494 | orchestrator | 2025-05-23 01:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:31.526159 | orchestrator | 2025-05-23 01:31:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:31.526266 | orchestrator | 2025-05-23 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:34.580931 | orchestrator | 2025-05-23 01:31:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:34.581100 | orchestrator | 2025-05-23 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:37.630601 | orchestrator | 2025-05-23 01:31:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:37.630701 | orchestrator | 2025-05-23 01:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:40.680125 | orchestrator | 2025-05-23 01:31:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:40.680231 | orchestrator | 2025-05-23 01:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:43.734690 | orchestrator | 2025-05-23 01:31:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:43.734866 | orchestrator | 2025-05-23 01:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:46.788343 | orchestrator | 2025-05-23 01:31:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:46.788456 | orchestrator | 2025-05-23 01:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:49.847124 | orchestrator | 2025-05-23 01:31:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:49.847225 | orchestrator | 2025-05-23 01:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:52.894069 | orchestrator | 2025-05-23 01:31:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:52.894183 | orchestrator | 2025-05-23 01:31:52 | 
INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:55.939868 | orchestrator | 2025-05-23 01:31:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:55.940036 | orchestrator | 2025-05-23 01:31:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:31:58.991704 | orchestrator | 2025-05-23 01:31:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:31:58.991811 | orchestrator | 2025-05-23 01:31:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:02.047300 | orchestrator | 2025-05-23 01:32:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:02.047406 | orchestrator | 2025-05-23 01:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:05.105390 | orchestrator | 2025-05-23 01:32:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:05.105502 | orchestrator | 2025-05-23 01:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:08.155769 | orchestrator | 2025-05-23 01:32:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:08.155872 | orchestrator | 2025-05-23 01:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:11.203008 | orchestrator | 2025-05-23 01:32:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:11.203125 | orchestrator | 2025-05-23 01:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:14.251051 | orchestrator | 2025-05-23 01:32:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:14.251153 | orchestrator | 2025-05-23 01:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:17.302897 | orchestrator | 2025-05-23 01:32:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:17.303037 | orchestrator | 2025-05-23 01:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:20.354196 | orchestrator | 2025-05-23 01:32:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:20.354295 | orchestrator | 2025-05-23 01:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:23.402946 | orchestrator | 2025-05-23 01:32:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:23.403088 | orchestrator | 2025-05-23 01:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:26.457045 | orchestrator | 2025-05-23 01:32:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:26.457150 | orchestrator | 2025-05-23 01:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:29.507723 | orchestrator | 2025-05-23 01:32:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:29.507829 | orchestrator | 2025-05-23 01:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:32.556884 | orchestrator | 2025-05-23 01:32:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:32.557023 | orchestrator | 2025-05-23 01:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:35.606443 | orchestrator | 2025-05-23 01:32:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:32:35.606549 | orchestrator | 2025-05-23 01:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:32:38.658704 | 
2025-05-23 01:32:38.658704 | orchestrator | 2025-05-23 01:32:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:32:38.659910 | orchestrator | 2025-05-23 01:32:38 | INFO  | Task 36383c4f-159f-4a81-9601-135015aa3977 is in state STARTED
[2025-05-23 01:32:41 – 2025-05-23 01:32:44 | orchestrator | INFO | both tasks reported in state STARTED on each check, each check followed by "Wait 1 second(s) until the next check"]
2025-05-23 01:32:47.822396 | orchestrator | 2025-05-23 01:32:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:32:47.823222 | orchestrator | 2025-05-23 01:32:47 | INFO  | Task 36383c4f-159f-4a81-9601-135015aa3977 is in state SUCCESS
[2025-05-23 01:32:50 – 2025-05-23 01:33:12 | orchestrator | INFO | Task eee81a36-e0fa-4360-a4d6-6ece23412765 continues to be reported in state STARTED roughly every 3 seconds, each check followed by "Wait 1 second(s) until the next check"]
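The output above comes from a simple wait loop: the client asks the task queue for each task's current state, logs it, sleeps, and checks again until the task leaves the STARTED state (the effective gap between checks is about 3 seconds despite the "Wait 1 second(s)" message, presumably because of the round trip to the API). Below is a minimal sketch of that pattern, assuming a Celery-style AsyncResult interface with a configured result backend; the function name wait_for_task and its parameters are illustrative, not the actual OSISM implementation.

```python
# Minimal sketch of the polling pattern visible in this log, assuming a
# Celery-style AsyncResult interface and a configured result backend.
# wait_for_task and its parameters are illustrative, not the actual OSISM code.
import time

from celery.result import AsyncResult


def wait_for_task(task_id: str, interval: float = 1.0) -> str:
    """Poll a task until it leaves the PENDING/STARTED states; return its final state."""
    result = AsyncResult(task_id)
    while result.state in ("PENDING", "STARTED"):
        print(f"Task {task_id} is in state {result.state}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return result.state


# Example: wait_for_task("eee81a36-e0fa-4360-a4d6-6ece23412765") would return
# the final state, e.g. "SUCCESS" or "FAILURE", once the task finishes.
```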
[2025-05-23 01:33:15 – 2025-05-23 01:41:57 | orchestrator | INFO | Task eee81a36-e0fa-4360-a4d6-6ece23412765 repeatedly reported in state STARTED, polled roughly every 3 seconds; each check followed by "Wait 1 second(s) until the next check"]
until the next check 2025-05-23 01:42:00.050554 | orchestrator | 2025-05-23 01:42:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:00.050684 | orchestrator | 2025-05-23 01:42:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:03.094791 | orchestrator | 2025-05-23 01:42:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:03.094900 | orchestrator | 2025-05-23 01:42:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:06.138672 | orchestrator | 2025-05-23 01:42:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:06.138764 | orchestrator | 2025-05-23 01:42:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:09.195429 | orchestrator | 2025-05-23 01:42:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:09.195532 | orchestrator | 2025-05-23 01:42:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:12.242668 | orchestrator | 2025-05-23 01:42:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:12.242777 | orchestrator | 2025-05-23 01:42:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:15.295952 | orchestrator | 2025-05-23 01:42:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:15.296062 | orchestrator | 2025-05-23 01:42:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:18.347174 | orchestrator | 2025-05-23 01:42:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:18.347286 | orchestrator | 2025-05-23 01:42:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:21.390352 | orchestrator | 2025-05-23 01:42:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:21.390455 | orchestrator | 2025-05-23 01:42:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:24.441024 | orchestrator | 2025-05-23 01:42:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:24.441166 | orchestrator | 2025-05-23 01:42:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:27.494593 | orchestrator | 2025-05-23 01:42:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:27.494701 | orchestrator | 2025-05-23 01:42:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:30.542265 | orchestrator | 2025-05-23 01:42:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:30.542402 | orchestrator | 2025-05-23 01:42:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:33.587249 | orchestrator | 2025-05-23 01:42:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:33.587356 | orchestrator | 2025-05-23 01:42:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:36.641247 | orchestrator | 2025-05-23 01:42:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:36.642324 | orchestrator | 2025-05-23 01:42:36 | INFO  | Task c648051e-8d44-4241-8962-a2b5ba0c5e88 is in state STARTED 2025-05-23 01:42:36.642358 | orchestrator | 2025-05-23 01:42:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:39.706553 | orchestrator | 2025-05-23 01:42:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:39.707559 | 
orchestrator | 2025-05-23 01:42:39 | INFO  | Task c648051e-8d44-4241-8962-a2b5ba0c5e88 is in state STARTED 2025-05-23 01:42:39.707789 | orchestrator | 2025-05-23 01:42:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:42.771829 | orchestrator | 2025-05-23 01:42:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:42.773582 | orchestrator | 2025-05-23 01:42:42 | INFO  | Task c648051e-8d44-4241-8962-a2b5ba0c5e88 is in state STARTED 2025-05-23 01:42:42.773619 | orchestrator | 2025-05-23 01:42:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:45.829223 | orchestrator | 2025-05-23 01:42:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:45.831302 | orchestrator | 2025-05-23 01:42:45 | INFO  | Task c648051e-8d44-4241-8962-a2b5ba0c5e88 is in state STARTED 2025-05-23 01:42:45.831472 | orchestrator | 2025-05-23 01:42:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:48.885964 | orchestrator | 2025-05-23 01:42:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:48.886983 | orchestrator | 2025-05-23 01:42:48 | INFO  | Task c648051e-8d44-4241-8962-a2b5ba0c5e88 is in state SUCCESS 2025-05-23 01:42:48.887043 | orchestrator | 2025-05-23 01:42:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:51.940166 | orchestrator | 2025-05-23 01:42:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:51.940267 | orchestrator | 2025-05-23 01:42:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:54.991702 | orchestrator | 2025-05-23 01:42:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:54.991813 | orchestrator | 2025-05-23 01:42:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:42:58.045623 | orchestrator | 2025-05-23 01:42:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:42:58.045731 | orchestrator | 2025-05-23 01:42:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:01.084919 | orchestrator | 2025-05-23 01:43:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:01.085025 | orchestrator | 2025-05-23 01:43:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:04.131853 | orchestrator | 2025-05-23 01:43:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:04.131965 | orchestrator | 2025-05-23 01:43:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:07.173679 | orchestrator | 2025-05-23 01:43:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:07.173822 | orchestrator | 2025-05-23 01:43:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:10.219959 | orchestrator | 2025-05-23 01:43:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:10.220088 | orchestrator | 2025-05-23 01:43:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:13.269632 | orchestrator | 2025-05-23 01:43:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:13.269738 | orchestrator | 2025-05-23 01:43:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:43:16.311621 | orchestrator | 2025-05-23 01:43:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:43:16.311725 | orchestrator | 2025-05-23 01:43:16 
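The entries above show the orchestrator waiting on long-running tasks: it checks the state of each outstanding task, announces a short wait, and repeats until the task leaves STARTED (for example, c648051e-8d44-4241-8962-a2b5ba0c5e88 reaches SUCCESS at 01:42:48 while eee81a36-e0fa-4360-a4d6-6ece23412765 keeps running). A minimal, hypothetical sketch of such a wait loop is given below; `get_task_state` is a placeholder for whatever API actually reports task state (it is not the OSISM client API), and the observed ~3 s cadence presumably comes from the state check itself taking a couple of seconds on top of the 1-second sleep.

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll each task until it reaches a terminal state (sketch only).

    get_task_state is a hypothetical callback returning e.g. "STARTED",
    "SUCCESS" or "FAILURE" for a given task id.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so we can discard finished tasks safely
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```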
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" polling entries for task eee81a36-e0fa-4360-a4d6-6ece23412765 repeat roughly every 3 seconds from 01:43:19 to 01:52:34 ...]
2025-05-23 01:52:37.556362 | orchestrator | 2025-05-23 01:52:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:52:37.558214 | orchestrator | 2025-05-23 01:52:37 | INFO  | Task 4ea771b6-5c20-44a3-afce-3d4a5543f63d is in state STARTED
2025-05-23 01:52:37.558278 | orchestrator | 2025-05-23 01:52:37 | INFO  | Wait 1 second(s) until the next check
[... both tasks report STARTED at 01:52:40, 01:52:43, and 01:52:46 ...]
2025-05-23 01:52:49.782667 | orchestrator | 2025-05-23 01:52:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:52:49.783840 | orchestrator | 2025-05-23 01:52:49 | INFO  | Task 4ea771b6-5c20-44a3-afce-3d4a5543f63d is in state SUCCESS
2025-05-23 01:52:49.783971 | orchestrator | 2025-05-23 01:52:49 | INFO  | Wait 1 second(s) until the next check
[... "is in state STARTED" / "Wait 1 second(s) until the next check" polling entries for task eee81a36-e0fa-4360-a4d6-6ece23412765 continue roughly every 3 seconds from 01:52:52 to 01:54:42 ...]
2025-05-23 01:54:45.659432 | orchestrator | 2025-05-23 01:54:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 01:54:45.659537 | orchestrator | 2025-05-23 01:54:45 | INFO  | Wait 1
second(s) until the next check 2025-05-23 01:54:48.708493 | orchestrator | 2025-05-23 01:54:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:54:48.708621 | orchestrator | 2025-05-23 01:54:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:54:51.756501 | orchestrator | 2025-05-23 01:54:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:54:51.756630 | orchestrator | 2025-05-23 01:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:54:54.809889 | orchestrator | 2025-05-23 01:54:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:54:54.810079 | orchestrator | 2025-05-23 01:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:54:57.856000 | orchestrator | 2025-05-23 01:54:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:54:57.856097 | orchestrator | 2025-05-23 01:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:00.909870 | orchestrator | 2025-05-23 01:55:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:00.909978 | orchestrator | 2025-05-23 01:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:03.956283 | orchestrator | 2025-05-23 01:55:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:03.956389 | orchestrator | 2025-05-23 01:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:06.996685 | orchestrator | 2025-05-23 01:55:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:06.996821 | orchestrator | 2025-05-23 01:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:10.049770 | orchestrator | 2025-05-23 01:55:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:10.049870 | orchestrator | 2025-05-23 01:55:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:13.106006 | orchestrator | 2025-05-23 01:55:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:13.106265 | orchestrator | 2025-05-23 01:55:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:16.155093 | orchestrator | 2025-05-23 01:55:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:16.155221 | orchestrator | 2025-05-23 01:55:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:19.207636 | orchestrator | 2025-05-23 01:55:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:19.207830 | orchestrator | 2025-05-23 01:55:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:22.258377 | orchestrator | 2025-05-23 01:55:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:22.258467 | orchestrator | 2025-05-23 01:55:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:25.310331 | orchestrator | 2025-05-23 01:55:25 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:25.310457 | orchestrator | 2025-05-23 01:55:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:28.361435 | orchestrator | 2025-05-23 01:55:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:28.361542 | orchestrator | 2025-05-23 01:55:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:31.413336 | orchestrator | 
2025-05-23 01:55:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:31.413441 | orchestrator | 2025-05-23 01:55:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:34.464812 | orchestrator | 2025-05-23 01:55:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:34.464918 | orchestrator | 2025-05-23 01:55:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:37.520972 | orchestrator | 2025-05-23 01:55:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:37.521075 | orchestrator | 2025-05-23 01:55:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:40.566391 | orchestrator | 2025-05-23 01:55:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:40.566486 | orchestrator | 2025-05-23 01:55:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:43.617151 | orchestrator | 2025-05-23 01:55:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:43.617302 | orchestrator | 2025-05-23 01:55:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:46.663387 | orchestrator | 2025-05-23 01:55:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:46.663509 | orchestrator | 2025-05-23 01:55:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:49.711469 | orchestrator | 2025-05-23 01:55:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:49.711578 | orchestrator | 2025-05-23 01:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:52.760076 | orchestrator | 2025-05-23 01:55:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:52.760176 | orchestrator | 2025-05-23 01:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:55.808969 | orchestrator | 2025-05-23 01:55:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:55.809163 | orchestrator | 2025-05-23 01:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:55:58.867834 | orchestrator | 2025-05-23 01:55:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:55:58.867948 | orchestrator | 2025-05-23 01:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:01.921695 | orchestrator | 2025-05-23 01:56:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:01.921810 | orchestrator | 2025-05-23 01:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:04.967025 | orchestrator | 2025-05-23 01:56:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:04.967132 | orchestrator | 2025-05-23 01:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:08.015523 | orchestrator | 2025-05-23 01:56:08 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:08.015653 | orchestrator | 2025-05-23 01:56:08 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:11.060859 | orchestrator | 2025-05-23 01:56:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:11.060979 | orchestrator | 2025-05-23 01:56:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:14.112422 | orchestrator | 2025-05-23 01:56:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in 
state STARTED 2025-05-23 01:56:14.112553 | orchestrator | 2025-05-23 01:56:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:17.160328 | orchestrator | 2025-05-23 01:56:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:17.160436 | orchestrator | 2025-05-23 01:56:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:20.213024 | orchestrator | 2025-05-23 01:56:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:20.213128 | orchestrator | 2025-05-23 01:56:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:23.263380 | orchestrator | 2025-05-23 01:56:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:23.263492 | orchestrator | 2025-05-23 01:56:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:26.310919 | orchestrator | 2025-05-23 01:56:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:26.311033 | orchestrator | 2025-05-23 01:56:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:29.354867 | orchestrator | 2025-05-23 01:56:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:29.354988 | orchestrator | 2025-05-23 01:56:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:32.398011 | orchestrator | 2025-05-23 01:56:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:32.398206 | orchestrator | 2025-05-23 01:56:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:35.448374 | orchestrator | 2025-05-23 01:56:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:35.448495 | orchestrator | 2025-05-23 01:56:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:38.503033 | orchestrator | 2025-05-23 01:56:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:38.503134 | orchestrator | 2025-05-23 01:56:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:41.555270 | orchestrator | 2025-05-23 01:56:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:41.555429 | orchestrator | 2025-05-23 01:56:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:44.606601 | orchestrator | 2025-05-23 01:56:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:44.607588 | orchestrator | 2025-05-23 01:56:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:47.654902 | orchestrator | 2025-05-23 01:56:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:47.655010 | orchestrator | 2025-05-23 01:56:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:50.702213 | orchestrator | 2025-05-23 01:56:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:50.702416 | orchestrator | 2025-05-23 01:56:50 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:53.765603 | orchestrator | 2025-05-23 01:56:53 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:53.765707 | orchestrator | 2025-05-23 01:56:53 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:56.817624 | orchestrator | 2025-05-23 01:56:56 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:56.817744 | orchestrator | 2025-05-23 01:56:56 | 
INFO  | Wait 1 second(s) until the next check 2025-05-23 01:56:59.870837 | orchestrator | 2025-05-23 01:56:59 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:56:59.870950 | orchestrator | 2025-05-23 01:56:59 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:02.925392 | orchestrator | 2025-05-23 01:57:02 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:02.925493 | orchestrator | 2025-05-23 01:57:02 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:05.973865 | orchestrator | 2025-05-23 01:57:05 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:05.973971 | orchestrator | 2025-05-23 01:57:05 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:09.026465 | orchestrator | 2025-05-23 01:57:09 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:09.026580 | orchestrator | 2025-05-23 01:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:12.073912 | orchestrator | 2025-05-23 01:57:12 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:12.074135 | orchestrator | 2025-05-23 01:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:15.125570 | orchestrator | 2025-05-23 01:57:15 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:15.125682 | orchestrator | 2025-05-23 01:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:18.176980 | orchestrator | 2025-05-23 01:57:18 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:18.177111 | orchestrator | 2025-05-23 01:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:21.228523 | orchestrator | 2025-05-23 01:57:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:21.228644 | orchestrator | 2025-05-23 01:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:24.276146 | orchestrator | 2025-05-23 01:57:24 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:24.276264 | orchestrator | 2025-05-23 01:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:27.318665 | orchestrator | 2025-05-23 01:57:27 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:27.318774 | orchestrator | 2025-05-23 01:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:30.364676 | orchestrator | 2025-05-23 01:57:30 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:30.364774 | orchestrator | 2025-05-23 01:57:30 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:33.417494 | orchestrator | 2025-05-23 01:57:33 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:33.417585 | orchestrator | 2025-05-23 01:57:33 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:36.471685 | orchestrator | 2025-05-23 01:57:36 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:36.471790 | orchestrator | 2025-05-23 01:57:36 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:39.514143 | orchestrator | 2025-05-23 01:57:39 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:39.514250 | orchestrator | 2025-05-23 01:57:39 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:42.569988 | 
orchestrator | 2025-05-23 01:57:42 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:42.570158 | orchestrator | 2025-05-23 01:57:42 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:45.623208 | orchestrator | 2025-05-23 01:57:45 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:45.623422 | orchestrator | 2025-05-23 01:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:48.673666 | orchestrator | 2025-05-23 01:57:48 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:48.673772 | orchestrator | 2025-05-23 01:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:51.718728 | orchestrator | 2025-05-23 01:57:51 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:51.719461 | orchestrator | 2025-05-23 01:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:54.771701 | orchestrator | 2025-05-23 01:57:54 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:54.771805 | orchestrator | 2025-05-23 01:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:57:57.822927 | orchestrator | 2025-05-23 01:57:57 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:57:57.823024 | orchestrator | 2025-05-23 01:57:57 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:00.868975 | orchestrator | 2025-05-23 01:58:00 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:00.869077 | orchestrator | 2025-05-23 01:58:00 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:03.918701 | orchestrator | 2025-05-23 01:58:03 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:03.918776 | orchestrator | 2025-05-23 01:58:03 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:06.953201 | orchestrator | 2025-05-23 01:58:06 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:06.953302 | orchestrator | 2025-05-23 01:58:06 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:10.015692 | orchestrator | 2025-05-23 01:58:10 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:10.015801 | orchestrator | 2025-05-23 01:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:13.059839 | orchestrator | 2025-05-23 01:58:13 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:13.059942 | orchestrator | 2025-05-23 01:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:16.110142 | orchestrator | 2025-05-23 01:58:16 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:16.110258 | orchestrator | 2025-05-23 01:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:19.163144 | orchestrator | 2025-05-23 01:58:19 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:19.163217 | orchestrator | 2025-05-23 01:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:22.214117 | orchestrator | 2025-05-23 01:58:22 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:22.214249 | orchestrator | 2025-05-23 01:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:25.269491 | orchestrator | 2025-05-23 01:58:25 | INFO  | Task 
eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:25.269594 | orchestrator | 2025-05-23 01:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:28.317235 | orchestrator | 2025-05-23 01:58:28 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:28.317350 | orchestrator | 2025-05-23 01:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:31.365166 | orchestrator | 2025-05-23 01:58:31 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:31.365273 | orchestrator | 2025-05-23 01:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:34.409834 | orchestrator | 2025-05-23 01:58:34 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:34.409939 | orchestrator | 2025-05-23 01:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:37.459309 | orchestrator | 2025-05-23 01:58:37 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:37.459446 | orchestrator | 2025-05-23 01:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:40.507867 | orchestrator | 2025-05-23 01:58:40 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:40.508003 | orchestrator | 2025-05-23 01:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:43.561111 | orchestrator | 2025-05-23 01:58:43 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:43.561220 | orchestrator | 2025-05-23 01:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:46.611564 | orchestrator | 2025-05-23 01:58:46 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:46.611674 | orchestrator | 2025-05-23 01:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:49.662772 | orchestrator | 2025-05-23 01:58:49 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:49.662883 | orchestrator | 2025-05-23 01:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:52.712288 | orchestrator | 2025-05-23 01:58:52 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:52.712418 | orchestrator | 2025-05-23 01:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:55.762186 | orchestrator | 2025-05-23 01:58:55 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:55.762323 | orchestrator | 2025-05-23 01:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:58:58.809711 | orchestrator | 2025-05-23 01:58:58 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:58:58.809824 | orchestrator | 2025-05-23 01:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:01.849797 | orchestrator | 2025-05-23 01:59:01 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:01.849900 | orchestrator | 2025-05-23 01:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:04.904540 | orchestrator | 2025-05-23 01:59:04 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:04.904649 | orchestrator | 2025-05-23 01:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:07.961257 | orchestrator | 2025-05-23 01:59:07 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 
01:59:07.961360 | orchestrator | 2025-05-23 01:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:11.013345 | orchestrator | 2025-05-23 01:59:11 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:11.013510 | orchestrator | 2025-05-23 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:14.061750 | orchestrator | 2025-05-23 01:59:14 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:14.061854 | orchestrator | 2025-05-23 01:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:17.109130 | orchestrator | 2025-05-23 01:59:17 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:17.109236 | orchestrator | 2025-05-23 01:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:20.164015 | orchestrator | 2025-05-23 01:59:20 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:20.164118 | orchestrator | 2025-05-23 01:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:23.208320 | orchestrator | 2025-05-23 01:59:23 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:23.208460 | orchestrator | 2025-05-23 01:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:26.252235 | orchestrator | 2025-05-23 01:59:26 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:26.252342 | orchestrator | 2025-05-23 01:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:29.296685 | orchestrator | 2025-05-23 01:59:29 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:29.296779 | orchestrator | 2025-05-23 01:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:32.340135 | orchestrator | 2025-05-23 01:59:32 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:32.340238 | orchestrator | 2025-05-23 01:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:35.400023 | orchestrator | 2025-05-23 01:59:35 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:35.400122 | orchestrator | 2025-05-23 01:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:38.452721 | orchestrator | 2025-05-23 01:59:38 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:38.452820 | orchestrator | 2025-05-23 01:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:41.501004 | orchestrator | 2025-05-23 01:59:41 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:41.501142 | orchestrator | 2025-05-23 01:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:44.546490 | orchestrator | 2025-05-23 01:59:44 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:44.546591 | orchestrator | 2025-05-23 01:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:47.598490 | orchestrator | 2025-05-23 01:59:47 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:47.598593 | orchestrator | 2025-05-23 01:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-23 01:59:50.652943 | orchestrator | 2025-05-23 01:59:50 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED 2025-05-23 01:59:50.653059 | orchestrator | 2025-05-23 01:59:50 | INFO  | Wait 1 second(s) 
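The block above is a client-side polling loop: the deploy step has handed work off as asynchronous tasks and checks their state every few seconds, printing one STARTED/wait pair per cycle until a task reaches a terminal state or the job-level timeout aborts the run. A minimal sketch of that pattern is shown below; `get_task_state`, the state names and the timeout handling are illustrative assumptions, not the actual OSISM client code.

```python
import time
from typing import Callable, Optional

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states


def wait_for_task(task_id: str,
                  get_task_state: Callable[[str], str],
                  interval: float = 1.0,
                  timeout: Optional[float] = None) -> str:
    """Poll a task until it reaches a terminal state or the deadline passes.

    `get_task_state` is a hypothetical callable returning the current state
    string (e.g. "STARTED", "SUCCESS") for the given task id.
    """
    deadline = time.monotonic() + timeout if timeout is not None else None
    while True:
        state = get_task_state(task_id)
        print(f"INFO  | Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(f"Task {task_id} is still in state {state}")
        print(f"INFO  | Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

In this run task 4ea771b6-5c20-44a3-afce-3d4a5543f63d did reach SUCCESS, but eee81a36-e0fa-4360-a4d6-6ece23412765 never left STARTED, so it is the surrounding Zuul job timeout, not the loop itself, that finally stops the playbook below.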
[... polling continues with no state change until the job timeout is reached ...]
2025-05-23 02:00:21.137909 | orchestrator | 2025-05-23 02:00:21 | INFO  | Task eee81a36-e0fa-4360-a4d6-6ece23412765 is in state STARTED
2025-05-23 02:00:21.138072 | orchestrator | 2025-05-23 02:00:21 | INFO  | Wait 1 second(s) until the next check
2025-05-23 02:00:22.143085 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-23 02:00:22.145968 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-23 02:00:23.030661 |
2025-05-23 02:00:23.030857 | PLAY [Post output play]
2025-05-23 02:00:23.048961 |
2025-05-23 02:00:23.049128 | LOOP [stage-output : Register sources]
2025-05-23 02:00:23.122988 |
2025-05-23 02:00:23.123388 | TASK [stage-output : Check sudo]
2025-05-23 02:00:23.997236 | orchestrator | sudo: a password is required
2025-05-23 02:00:24.170235 | orchestrator | ok: Runtime: 0:00:00.015093
2025-05-23 02:00:24.187364 |
2025-05-23 02:00:24.187543 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-23 02:00:24.225252 |
2025-05-23 02:00:24.225555 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-23 02:00:24.313978 | orchestrator | ok
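The stage-output tasks in this post-run play collect whatever the job produced: they map the docs, artifacts and logs sources to staging folders, create those folders, copy the content over and make the log files readable so they can be uploaded later. A rough, self-contained sketch of that sequence follows; the paths and the mapping are made up for illustration, the real role derives them from Zuul variables.

```python
import os
import shutil

# Hypothetical mapping: output type -> (source dir, staging dir).
OUTPUTS = {
    "docs": ("workspace/docs", "staging/docs"),
    "artifacts": ("workspace/artifacts", "staging/artifacts"),
    "logs": ("workspace/logs", "staging/logs"),
}


def stage_outputs(outputs: dict = OUTPUTS) -> None:
    for _name, (src, dest) in outputs.items():
        os.makedirs(dest, exist_ok=True)  # "Ensure target folders exist"
        if os.path.isdir(src):
            # "Copy files and folders to staging folder"
            shutil.copytree(src, dest, dirs_exist_ok=True)
    # "Make all log files readable" before they are shipped off the node.
    logs_dest = outputs["logs"][1]
    for root, _dirs, files in os.walk(logs_dest):
        for name in files:
            os.chmod(os.path.join(root, name), 0o644)
```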
2025-05-23 02:00:24.323140 |
2025-05-23 02:00:24.323290 | LOOP [stage-output : Ensure target folders exist]
2025-05-23 02:00:24.810133 | orchestrator | ok: "docs"
2025-05-23 02:00:24.810486 |
2025-05-23 02:00:25.059329 | orchestrator | ok: "artifacts"
2025-05-23 02:00:25.308526 | orchestrator | ok: "logs"
2025-05-23 02:00:25.329740 |
2025-05-23 02:00:25.329974 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-23 02:00:25.368850 |
2025-05-23 02:00:25.369169 | TASK [stage-output : Make all log files readable]
2025-05-23 02:00:25.664179 | orchestrator | ok
2025-05-23 02:00:25.674093 |
2025-05-23 02:00:25.674253 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-23 02:00:25.710015 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:25.729371 |
2025-05-23 02:00:25.729587 | TASK [stage-output : Discover log files for compression]
2025-05-23 02:00:25.756092 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:25.772094 |
2025-05-23 02:00:25.772273 | LOOP [stage-output : Archive everything from logs]
2025-05-23 02:00:25.821598 |
2025-05-23 02:00:25.821793 | PLAY [Post cleanup play]
2025-05-23 02:00:25.830668 |
2025-05-23 02:00:25.830793 | TASK [Set cloud fact (Zuul deployment)]
2025-05-23 02:00:25.899867 | orchestrator | ok
2025-05-23 02:00:25.911698 |
2025-05-23 02:00:25.911972 | TASK [Set cloud fact (local deployment)]
2025-05-23 02:00:25.947155 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:25.962871 |
2025-05-23 02:00:25.963043 | TASK [Clean the cloud environment]
2025-05-23 02:00:26.570568 | orchestrator | 2025-05-23 02:00:26 - clean up servers
2025-05-23 02:00:27.406293 | orchestrator | 2025-05-23 02:00:27 - testbed-manager
2025-05-23 02:00:27.489378 | orchestrator | 2025-05-23 02:00:27 - testbed-node-4
2025-05-23 02:00:27.580620 | orchestrator | 2025-05-23 02:00:27 - testbed-node-3
2025-05-23 02:00:27.673066 | orchestrator | 2025-05-23 02:00:27 - testbed-node-5
2025-05-23 02:00:27.772996 | orchestrator | 2025-05-23 02:00:27 - testbed-node-0
2025-05-23 02:00:27.881253 | orchestrator | 2025-05-23 02:00:27 - testbed-node-1
2025-05-23 02:00:27.976295 | orchestrator | 2025-05-23 02:00:27 - testbed-node-2
2025-05-23 02:00:28.083437 | orchestrator | 2025-05-23 02:00:28 - clean up keypairs
2025-05-23 02:00:28.102552 | orchestrator | 2025-05-23 02:00:28 - testbed
2025-05-23 02:00:28.130397 | orchestrator | 2025-05-23 02:00:28 - wait for servers to be gone
2025-05-23 02:00:36.948231 | orchestrator | 2025-05-23 02:00:36 - clean up ports
2025-05-23 02:00:37.135134 | orchestrator | 2025-05-23 02:00:37 - 2854d693-b0b6-4446-9bfa-c4975a5f9a81
2025-05-23 02:00:37.480851 | orchestrator | 2025-05-23 02:00:37 - 3d18babb-1686-4a85-b6ba-623579a67cc3
2025-05-23 02:00:37.818636 | orchestrator | 2025-05-23 02:00:37 - 45d15a3c-d08c-4636-ac45-aade6cbb71e5
2025-05-23 02:00:38.331127 | orchestrator | 2025-05-23 02:00:38 - 46e7d142-2f74-4b01-9518-ed38e7a4d20e
2025-05-23 02:00:38.550169 | orchestrator | 2025-05-23 02:00:38 - 69414a6e-6211-40b6-8e89-516eef41eab8
2025-05-23 02:00:38.747564 | orchestrator | 2025-05-23 02:00:38 - e2673372-05be-4189-8cdf-04a4bf42df29
2025-05-23 02:00:38.952055 | orchestrator | 2025-05-23 02:00:38 - f381ecb5-9dea-4d17-84e1-c416e48b00d8
2025-05-23 02:00:39.186786 | orchestrator | 2025-05-23 02:00:39 - clean up volumes
2025-05-23 02:00:39.303232 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-5-node-base
2025-05-23 02:00:39.342291 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-1-node-base
2025-05-23 02:00:39.381510 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-4-node-base
2025-05-23 02:00:39.428847 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-manager-base
2025-05-23 02:00:39.473367 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-0-node-base
2025-05-23 02:00:39.513774 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-2-node-base
2025-05-23 02:00:39.560865 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-3-node-base
2025-05-23 02:00:39.610345 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-5-node-5
2025-05-23 02:00:39.649667 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-3-node-3
2025-05-23 02:00:39.695180 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-0-node-3
2025-05-23 02:00:39.732280 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-6-node-3
2025-05-23 02:00:39.773095 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-2-node-5
2025-05-23 02:00:39.814119 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-1-node-4
2025-05-23 02:00:39.854951 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-4-node-4
2025-05-23 02:00:39.894188 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-7-node-4
2025-05-23 02:00:39.937709 | orchestrator | 2025-05-23 02:00:39 - testbed-volume-8-node-5
2025-05-23 02:00:39.981588 | orchestrator | 2025-05-23 02:00:39 - disconnect routers
2025-05-23 02:00:40.152974 | orchestrator | 2025-05-23 02:00:40 - testbed
2025-05-23 02:00:41.475699 | orchestrator | 2025-05-23 02:00:41 - clean up subnets
2025-05-23 02:00:41.530762 | orchestrator | 2025-05-23 02:00:41 - subnet-testbed-management
2025-05-23 02:00:41.705942 | orchestrator | 2025-05-23 02:00:41 - clean up networks
2025-05-23 02:00:41.837378 | orchestrator | 2025-05-23 02:00:41 - net-testbed-management
2025-05-23 02:00:42.192938 | orchestrator | 2025-05-23 02:00:42 - clean up security groups
2025-05-23 02:00:42.235242 | orchestrator | 2025-05-23 02:00:42 - testbed-node
2025-05-23 02:00:42.355115 | orchestrator | 2025-05-23 02:00:42 - testbed-management
2025-05-23 02:00:42.496135 | orchestrator | 2025-05-23 02:00:42 - clean up floating ips
2025-05-23 02:00:42.528745 | orchestrator | 2025-05-23 02:00:42 - 81.163.193.13
2025-05-23 02:00:42.880133 | orchestrator | 2025-05-23 02:00:42 - clean up routers
2025-05-23 02:00:42.982620 | orchestrator | 2025-05-23 02:00:42 - testbed
2025-05-23 02:00:44.517991 | orchestrator | ok: Runtime: 0:00:18.021958
2025-05-23 02:00:44.523696 |
2025-05-23 02:00:44.523816 | PLAY RECAP
2025-05-23 02:00:44.523878 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-23 02:00:44.523920 |
2025-05-23 02:00:44.676750 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
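The "Clean the cloud environment" step above removes the testbed resources in dependency order: servers and the keypair first, then (after waiting for the servers to disappear) ports, volumes, the router interface, subnet, network, security groups, floating IPs and finally the router itself. A condensed sketch of that order using openstacksdk is shown below; the cloud name, the name filters and the missing error handling are simplifications for illustration, not the testbed's actual cleanup script.

```python
import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is an assumption

# clean up servers and keypairs, then wait for the servers to be gone
servers = [s for s in conn.compute.servers() if s.name.startswith("testbed-")]
for server in servers:
    conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name == "testbed":
        conn.compute.delete_keypair(keypair)
for server in servers:
    conn.compute.wait_for_delete(server)

# clean up ports and volumes once nothing references them any more
for port in conn.network.ports():
    conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith("testbed-volume-"):
        conn.block_storage.delete_volume(volume)

# disconnect the router, then remove subnet, network, security groups,
# floating IPs and finally the router itself
router = conn.network.find_router("testbed")
subnet = conn.network.find_subnet("subnet-testbed-management")
if router and subnet:
    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
if subnet:
    conn.network.delete_subnet(subnet)
network = conn.network.find_network("net-testbed-management")
if network:
    conn.network.delete_network(network)
for name in ("testbed-node", "testbed-management"):
    group = conn.network.find_security_group(name)
    if group:
        conn.network.delete_security_group(group)
for floating_ip in conn.network.ips():
    conn.network.delete_ip(floating_ip)
if router:
    conn.network.delete_router(router)
```

The cleanup.yml post-run that follows re-runs the same step against an already empty project, which is why it prints only the section headers and completes in about a second.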
2025-05-23 02:00:44.679383 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-23 02:00:45.479537 |
2025-05-23 02:00:45.479709 | PLAY [Cleanup play]
2025-05-23 02:00:45.496535 |
2025-05-23 02:00:45.496686 | TASK [Set cloud fact (Zuul deployment)]
2025-05-23 02:00:45.550559 | orchestrator | ok
2025-05-23 02:00:45.562466 |
2025-05-23 02:00:45.562645 | TASK [Set cloud fact (local deployment)]
2025-05-23 02:00:45.587241 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:45.603699 |
2025-05-23 02:00:45.603877 | TASK [Clean the cloud environment]
2025-05-23 02:00:46.787053 | orchestrator | 2025-05-23 02:00:46 - clean up servers
2025-05-23 02:00:47.366517 | orchestrator | 2025-05-23 02:00:47 - clean up keypairs
2025-05-23 02:00:47.384705 | orchestrator | 2025-05-23 02:00:47 - wait for servers to be gone
2025-05-23 02:00:47.431603 | orchestrator | 2025-05-23 02:00:47 - clean up ports
2025-05-23 02:00:47.504690 | orchestrator | 2025-05-23 02:00:47 - clean up volumes
2025-05-23 02:00:47.580163 | orchestrator | 2025-05-23 02:00:47 - disconnect routers
2025-05-23 02:00:47.609996 | orchestrator | 2025-05-23 02:00:47 - clean up subnets
2025-05-23 02:00:47.629037 | orchestrator | 2025-05-23 02:00:47 - clean up networks
2025-05-23 02:00:47.788698 | orchestrator | 2025-05-23 02:00:47 - clean up security groups
2025-05-23 02:00:47.824730 | orchestrator | 2025-05-23 02:00:47 - clean up floating ips
2025-05-23 02:00:47.849997 | orchestrator | 2025-05-23 02:00:47 - clean up routers
2025-05-23 02:00:48.145890 | orchestrator | ok: Runtime: 0:00:01.437970
2025-05-23 02:00:48.149731 |
2025-05-23 02:00:48.149893 | PLAY RECAP
2025-05-23 02:00:48.150068 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-23 02:00:48.150129 |
2025-05-23 02:00:48.294755 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-23 02:00:48.297086 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-23 02:00:49.116660 |
2025-05-23 02:00:49.116857 | PLAY [Base post-fetch]
2025-05-23 02:00:49.133501 |
2025-05-23 02:00:49.133662 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-23 02:00:49.190646 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:49.205514 |
2025-05-23 02:00:49.205775 | TASK [fetch-output : Set log path for single node]
2025-05-23 02:00:49.267997 | orchestrator | ok
2025-05-23 02:00:49.278457 |
2025-05-23 02:00:49.278628 | LOOP [fetch-output : Ensure local output dirs]
2025-05-23 02:00:49.801604 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/logs"
2025-05-23 02:00:50.122512 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/artifacts"
2025-05-23 02:00:50.400141 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/512ad4498ee84ff7bed9a58524b24fdc/work/docs"
2025-05-23 02:00:50.424959 |
2025-05-23 02:00:50.425156 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-23 02:00:51.402976 | orchestrator | changed: .d..t...... ./
2025-05-23 02:00:51.403279 | orchestrator | changed: All items complete
2025-05-23 02:00:51.403325 |
2025-05-23 02:00:52.137308 | orchestrator | changed: .d..t...... ./
2025-05-23 02:00:52.908562 | orchestrator | changed: .d..t...... ./
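fetch-output first creates the logs, artifacts and docs directories inside the build's work directory on the executor and then pulls the staged content across; the `.d..t...... ./` lines are rsync's itemized-changes output, here reporting only a timestamp update on the top-level directory of each synced tree. A minimal equivalent using plain rsync via subprocess might look like the following; the remote address and paths are illustrative, the real role builds them from Zuul's inventory and the build UUID.

```python
import pathlib
import subprocess

REMOTE = "zuul@orchestrator"                                        # illustrative remote node
WORK_DIR = pathlib.Path("/var/lib/zuul/builds/<build-uuid>/work")   # illustrative local path


def fetch_outputs(kinds=("logs", "artifacts", "docs")) -> None:
    for kind in kinds:
        local_dir = WORK_DIR / kind
        local_dir.mkdir(parents=True, exist_ok=True)   # "Ensure local output dirs"
        # "-i" prints the itemized-change lines seen in the log (e.g. ".d..t...... ./")
        subprocess.run(
            ["rsync", "-a", "-i", f"{REMOTE}:{kind}/", f"{local_dir}/"],
            check=True,
        )
```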
2025-05-23 02:00:52.940396 |
2025-05-23 02:00:52.940558 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-23 02:00:52.969513 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:52.972103 | orchestrator | skipping: Conditional result was False
2025-05-23 02:00:52.999603 |
2025-05-23 02:00:52.999742 | PLAY RECAP
2025-05-23 02:00:52.999831 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-23 02:00:52.999874 |
2025-05-23 02:00:53.146553 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-23 02:00:53.147582 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-23 02:00:53.931285 |
2025-05-23 02:00:53.931481 | PLAY [Base post]
2025-05-23 02:00:53.949750 |
2025-05-23 02:00:53.949909 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-23 02:00:55.027250 | orchestrator | changed
2025-05-23 02:00:55.038411 |
2025-05-23 02:00:55.038552 | PLAY RECAP
2025-05-23 02:00:55.038633 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-23 02:00:55.038714 |
2025-05-23 02:00:55.172823 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-23 02:00:55.176998 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-23 02:00:55.990074 |
2025-05-23 02:00:55.990251 | PLAY [Base post-logs]
2025-05-23 02:00:56.001598 |
2025-05-23 02:00:56.001753 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-23 02:00:56.589011 | localhost | changed
2025-05-23 02:00:56.604396 |
2025-05-23 02:00:56.604599 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-23 02:00:56.631538 | localhost | ok
2025-05-23 02:00:56.636822 |
2025-05-23 02:00:56.636991 | TASK [Set zuul-log-path fact]
2025-05-23 02:00:56.654712 | localhost | ok
2025-05-23 02:00:56.669890 |
2025-05-23 02:00:56.670114 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-23 02:00:56.699902 | localhost | ok
2025-05-23 02:00:56.707190 |
2025-05-23 02:00:56.707399 | TASK [upload-logs : Create log directories]
2025-05-23 02:00:57.268967 | localhost | changed
2025-05-23 02:00:57.274721 |
2025-05-23 02:00:57.274958 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-23 02:00:57.837726 | localhost -> localhost | ok: Runtime: 0:00:00.008181
2025-05-23 02:00:57.842044 |
2025-05-23 02:00:57.842176 | TASK [upload-logs : Upload logs to log server]
2025-05-23 02:00:58.431892 | localhost | Output suppressed because no_log was given
2025-05-23 02:00:58.435257 |
2025-05-23 02:00:58.435432 | LOOP [upload-logs : Compress console log and json output]
2025-05-23 02:00:58.495539 | localhost | skipping: Conditional result was False
2025-05-23 02:00:58.511050 | localhost | skipping: Conditional result was False
2025-05-23 02:00:58.519733 |
2025-05-23 02:00:58.520056 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-23 02:00:58.571123 | localhost | skipping: Conditional result was False
2025-05-23 02:00:58.571712 |
2025-05-23 02:00:58.575276 | localhost | skipping: Conditional result was False
2025-05-23 02:00:58.583592 |
2025-05-23 02:00:58.583868 | LOOP [upload-logs : Upload console log and json output]